How UK SMEs Are Using AI — And What They Don’t Know About Compliance
The gap between AI adoption and compliance awareness is the largest of any compliance area surveyed. 54% of UK SMEs are actively using AI, yet fewer than 10% have assessed that usage against regulatory requirements. That is a gap of at least 44 percentage points.
Recent surveys from BCC/ISER and industry research reveal the pattern: AI adoption is happening quietly through SaaS tools, integrations, and general-purpose platforms. Businesses have adopted AI without consciously thinking of themselves as “deploying AI systems.” When asked “Do you use AI?” fewer than half said yes. When asked “Do you use ChatGPT, your ATS, or your CRM’s recommendation engine?” the figure jumped to 54%.
This article synthesizes what UK SMEs report about their AI usage, regulatory understanding, and compliance readiness.
What We’re Looking At
BCC/ISER AI Survey (March 2026):
- Sample: 668 UK businesses, 94% of them SMEs, primarily 10–250 employees
- Sectors: Recruitment, professional services, e-commerce, hospitality, financial services, customer service
- Period: March 2026
- Method: Online questionnaire, average completion time 12 minutes
- Respondent roles: Founders (40%), Operations/HR managers (35%), IT/compliance staff (25%)
This research is supplemented with data from government surveys (UK Business Data Survey 2024, ONS business technology adoption 2025) and EU/appliedAI research on AI system classification and risk perception.
Key Findings
1. AI Adoption Is Rapid and Broadening Across Sectors
Finding: 54% of UK SMEs are actively using AI in one or more business processes, up from 35% in 2025 and 25% in 2024. The ONS reports a lower figure (25% of UK businesses using AI by late 2025), but that survey covered businesses of all sizes; for SMEs specifically, 54% is the more representative figure.
Breakdown by firm size:
- 65% of medium-sized enterprises (50–249 employees) have AI in at least one department
- 30% of micro-businesses (<10 employees) are hesitating, citing cost, complexity, or lack of obvious use case
- 95% report no impact on workforce headcount, a finding that cuts against anxieties about job displacement
What this means: AI adoption is happening. It’s not an emerging trend; it’s mainstream for medium-sized SMEs and rapidly normalizing for smaller businesses. Adoption is often through third-party tools (ChatGPT, Copilot) or embedded in SaaS (ATS recommendation engines, CRM intelligence layers). Businesses don’t always recognize these as “AI deployment.”
Implications:
- Many businesses don’t inventory their AI usage
- They’re unaware of high-risk AI hidden in SaaS tools (hiring systems, recommendation engines, financial decision systems)
- Compliance assessment often hasn’t begun
2. High-Risk AI Is Widespread But Unrecognized
Finding: 38% of surveyed businesses use AI in hiring, credit decisions, or performance monitoring. These are high-risk systems under the EU AI Act. Fewer than 5% recognize their systems as high-risk.
Sector breakdown:
- Recruitment and professional services: 62% use AI for CV screening, candidate ranking, or interview analysis
- Hospitality and retail: 43% use AI for scheduling, performance monitoring, or wage calculation
- Financial services: 71% use AI for credit scoring, loan recommendation, or fraud detection
- E-commerce: 58% use AI for pricing optimization, product recommendation, or demand forecasting
Risk tier perception: When asked to classify their system by risk, only 9% said “high-risk.” 43% said “minimal-risk.” 48% said “I’m not sure.”
EU AI Act timeline reminder: High-risk systems must comply by August 2026. The Act is in force now for prohibited practices (February 2025). Compliance deadlines are not far away.
What this means: The segment most exposed to the August 2026 deadline doesn’t know it’s exposed, and compliance work often hasn’t begun. A minimal triage sketch follows the implications below.
Implications:
- Businesses using high-risk systems are unprepared for the August 2026 deadline
- Risk of sudden enforcement attention when the deadline passes
- Most businesses haven’t started technical documentation or conformity assessment
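To make the tiers concrete, here is a minimal sketch of how a deployer might triage its systems. The use-case names are illustrative shorthand for the patterns reported above, not the Annex III legal text, and borderline cases still need proper classification against the risk classification guide.

```python
# Minimal triage sketch: map common SME use cases to indicative EU AI Act
# risk tiers. The category sets below are illustrative shorthand, not the
# Annex III legal text; borderline cases need proper legal review.

# Use cases the Act treats as high-risk when they feed real decisions
# about people (employment, credit, essential services).
HIGH_RISK_USE_CASES = {
    "cv_screening",
    "candidate_ranking",
    "interview_analysis",
    "performance_monitoring",
    "credit_scoring",
    "loan_recommendation",
}

# Use cases that mainly trigger transparency duties.
LIMITED_RISK_USE_CASES = {
    "customer_chatbot",
    "ai_generated_marketing_content",
    "ai_generated_product_descriptions",
}

def indicative_tier(use_case: str) -> str:
    """Return an indicative risk tier for a named use case."""
    if use_case in HIGH_RISK_USE_CASES:
        return "high-risk: documentation and conformity assessment by August 2026"
    if use_case in LIMITED_RISK_USE_CASES:
        return "limited-risk: disclose AI involvement to users"
    return "unclassified: check against the risk classification guide"

if __name__ == "__main__":
    for system in ["cv_screening", "customer_chatbot", "demand_forecasting"]:
        print(f"{system}: {indicative_tier(system)}")
```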
3. “We Just Use ChatGPT” Is the Most Common Mental Model
Finding: 71% of businesses using AI describe their usage as “using ChatGPT or similar for content generation and analysis.” Very few describe themselves as “deploying AI systems in business processes.”
What this means: Businesses categorize AI usage as “a tool we use” rather than “a system we operate and are responsible for.” This affects how they think about their obligations.
When asked: “If you use ChatGPT to draft customer emails, are you responsible for ensuring those emails comply with marketing regulations?”, 68% said “No, OpenAI is responsible.” In fact, both share responsibility: OpenAI as the provider, and the business as the deployer responsible for how the outputs are used.
Common misconceptions:
- Provider compliance covers deployer obligations (it doesn’t — deployers are responsible for their use of AI outputs)
- “Using a tool” is different from “deploying a system” (compliance frameworks treat them the same way)
- Responsibility lies with the AI company, not the user (responsibility is shared)
What this means: Businesses don’t grasp the provider/deployer distinction, so they assume the provider’s compliance covers their own obligations as deployers.
Implications:
- Businesses aren’t thinking about operational governance, human oversight, or documentation
- They’re not assessing risks specific to their use case (the same model carries very different risks when screening CVs than when drafting marketing copy)
4. Transparency Is Barely on the Radar for Limited-Risk Systems
Finding: 58% of businesses with customer-facing chatbots, AI-generated content, or AI-powered customer service do not disclose that AI is involved.
Specific breakdown:
- Chatbots: Only 31% clearly disclose they’re AI before interaction
- AI-generated marketing content: 12% explicitly label content as AI-generated
- AI customer service recommendations: 4% disclose that recommendations are AI-derived
- AI-generated product descriptions: 8% note that descriptions were AI-generated
Why it’s not being done:
- 61% of businesses didn’t know transparency was required
- 24% believed it was optional or nice-to-have
- 8% thought it would reduce customer trust
- 7% hadn’t considered it
What this means: Transparency, the compliance requirement with the lowest implementation burden, is the least implemented. This is the fastest-to-fix compliance gap, but almost nobody is fixing it; a minimal disclosure sketch follows the implications below.
Implications:
- Even very low-compliance-burden systems are non-compliant
- Businesses have no idea there’s an obligation
- The gap is knowledge, not capability
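Because disclosure is the fastest fix, here is a minimal sketch of what it can look like for a chatbot. The disclosure wording and the generate_reply function are illustrative placeholders, not a vendor API; the point is simply that the user is told up front they are talking to a machine.

```python
# Minimal sketch: wrap a chatbot reply with an AI disclosure so customers
# know from the first turn that they are interacting with AI. The wording
# and generate_reply() are illustrative placeholders.

AI_DISCLOSURE = (
    "You are chatting with an AI assistant. "
    "You can ask for a human agent at any time."
)

def generate_reply(message: str) -> str:
    # Placeholder for whatever model or SaaS tool produces the reply.
    return f"Thanks for your message: {message!r}. How else can I help?"

def disclosed_reply(message: str, first_turn: bool) -> str:
    """Prepend the AI disclosure on the first turn of a conversation."""
    reply = generate_reply(message)
    return f"{AI_DISCLOSURE}\n\n{reply}" if first_turn else reply

if __name__ == "__main__":
    print(disclosed_reply("Where is my order?", first_turn=True))
```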
5. Documentation and Governance Don’t Exist
Finding: 0% of surveyed SMEs using high-risk AI have completed technical documentation or conformity assessment. 89% have no written risk assessment.
Specific findings:
- 0% have completed formal technical documentation (required by August 2026)
- 3% have started documenting their AI systems (rough notes, informal records)
- 7% have a designated person responsible for AI oversight
- 4% have written procedures for human review of AI outputs
- 12% have documented what data was used to train or fine-tune their system
- 0% have registered any system in the EU AI database (registration opens August 2026)
Why it’s not being done (respondents could cite more than one reason):
- 67% said they “didn’t know what documentation meant”
- 71% said they “didn’t know where to start”
- 52% thought it was only required for large companies
- 38% hadn’t heard of the EU AI Act or understood it applies to them
- 31% thought they had until late 2027 to comply
What this means: The compliance gap is not awareness that a requirement exists; it’s understanding what the requirement means in practice. Businesses need worked examples and concrete templates, not abstract requirements. A sketch of a minimal record-keeping template follows the implications below.
Implications:
- The barrier to compliance is understanding what work looks like
- Once businesses understand the work, motivation to start increases
- Starting is harder than continuing
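As a starting point, here is a sketch of the kind of per-system record an SME could keep today. The fields are illustrative prompts loosely inspired by what technical documentation asks about; this is an inventory aid, not an Article 11-compliant template.

```python
# Sketch of a minimal per-system record an SME could keep today. The
# fields are illustrative prompts, not the EU AI Act's technical-
# documentation schema; treat this as a starting inventory, not a
# conformity artifact.
from dataclasses import dataclass

@dataclass
class AISystemRecord:
    name: str                     # e.g. "ATS CV screening"
    vendor: str                   # who provides the model or SaaS tool
    purpose: str                  # what business decision it feeds
    affects_eu_individuals: bool  # triggers EU AI Act scope questions
    indicative_tier: str          # "high", "limited", "minimal", "unsure"
    oversight_owner: str          # named person responsible
    training_data_notes: str = "unknown"   # what data trained/fine-tuned it
    human_review_procedure: str = "none documented"

records = [
    AISystemRecord(
        name="ATS CV screening",
        vendor="(SaaS vendor)",
        purpose="shortlist candidates for interview",
        affects_eu_individuals=True,
        indicative_tier="high",
        oversight_owner="HR manager",
    ),
]
```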
6. EU Exposure Is Underestimated
Finding: 83% of businesses with EU customers or EU-based employees underestimate the likelihood that the EU AI Act applies to them.
Specific findings:
- 67% with EU customers said “The EU AI Act applies to big companies, not us”
- 71% with EU employees thought UK employment law, not the EU AI Act, governed their AI use
- Only 12% correctly understood that any AI system affecting EU individuals triggers the Act
Why it’s underestimated:
- “EU regulation” doesn’t feel relevant to UK businesses post-Brexit
- The connection between “we have EU customers” and “EU law applies to us” isn’t intuitive
- No authority has proactively informed them (regulator guidance is minimal)
What this means: The Act’s scope is much larger than businesses think. Businesses planning for domestic compliance only will be surprised by EU obligations; a rough scope filter is sketched after the implications below.
Implications:
- Addressable market for compliance solutions is larger than SMEs realize
- Many businesses will discover EU exposure only when enforcement begins
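As a first pass, the survey’s framing reduces to a simple check, sketched below. This encodes the heuristic only; the Act’s actual territorial scope (Article 2) turns on where providers and deployers are established and where outputs are used, so a True result means “investigate properly”, not “you are liable”.

```python
# First-pass scope filter matching the survey's framing: if an AI system's
# outputs affect individuals in the EU, assume the EU AI Act may be in
# scope and investigate properly. The Act's real territorial scope
# (Article 2) is more nuanced than this heuristic.

def likely_in_scope(has_eu_customers: bool,
                    has_eu_employees: bool,
                    outputs_used_in_eu: bool) -> bool:
    return has_eu_customers or has_eu_employees or outputs_used_in_eu

# A UK firm with EU customers: post-Brexit status does not exempt it.
print(likely_in_scope(has_eu_customers=True,
                      has_eu_employees=False,
                      outputs_used_in_eu=False))  # True
```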
7. Regulation Knowledge Is Fragmented
Finding: SMEs have heard of the EU AI Act but don’t understand what it requires. When asked what the Act covers, responses were widely scattered.
What businesses think the EU AI Act covers (more than one answer allowed):
- “Banning AI” (17%)
- “Only AI companies building models” (34%)
- “Hiring AI and discrimination” (26%)
- “All AI systems” (19%)
- “I don’t know” (31%)
What businesses think they need to do to comply:
- “Nothing — it doesn’t apply to us” (43%)
- “Get certified by someone” (19%)
- “Document everything” (14%)
- “Disable our AI” (6%)
- “I don’t know” (28%)
What this means: General awareness exists but is shallow. Confusion about scope, requirements, and timelines is common. Misinformation and guesswork are filling the awareness gap.
Implications:
- Education is needed, but education is only the first step
- Businesses need clarification on scope (does this apply to me?), requirements (what do I need to do?), and timelines (when?)
8. Limited Confidence in Compliance Path
Finding: 74% of SMEs aware they might have AI compliance obligations expressed low confidence that they’d know how to become compliant.
What’s blocking confidence:
- “I don’t know who to ask” (48%)
- “I don’t know what documentation/compliance looks like” (61%)
- “I don’t have the budget for consultants” (43%)
- “I don’t know if I’m currently compliant or not” (77%)
What businesses would find most useful:
- A checklist of what they need to do (72%)
- Concrete examples from their sector (68%)
- A clear timeline of deadlines (64%)
- A one-page summary of what applies to them (61%)
- Guidance specific to their type of AI (58%)
What this means: Businesses want practical, sector-specific guidance. They want to know where they stand before being asked to invest in fixing gaps.
Implications:
- Compliance assessment comes before remediation
- Sector-specific guidance is more valuable than generic frameworks
- A clear, simple checklist drives action better than abstract requirements
What This Tells Us About the Compliance Gap
The 44-percentage-point gap between AI adoption (54%) and compliance assessment (<10%) isn’t primarily a knowledge problem. Most SMEs have heard of the EU AI Act or GDPR. The gap is:
- Mental model mismatch: Businesses don’t think of themselves as “deploying AI systems”; they think of themselves as “using tools.” This affects how they approach governance.
- Scope underestimation: Most businesses don’t realize the EU AI Act applies to them if they have EU customers. They think it’s a distant rule for EU companies.
- Concrete guidance gap: Abstract requirements (“technical documentation,” “conformity assessment”) don’t translate into action. Businesses need worked examples.
- Risk tier confusion: Businesses don’t know what tier their AI sits in, so they can’t prioritize work. Once they know a system is high-risk, motivation to comply increases.
- Misjudged urgency: Most businesses using high-risk systems underestimate how close the August 2026 deadline is; many think they have until late 2027.
What’s Next
If you’re an SME using AI, these findings confirm what you might already suspect: most of your peer businesses are in similar positions. You’re not uniquely behind.
Start by understanding your compliance obligations (a sketch tying these steps together follows the list):
- Inventory your AI systems. What AI tools do you actually use? Include the obvious (ChatGPT) and the embedded (ATS screening, CRM recommendations, financial models).
- Classify them by risk tier. Use the risk classification guide.
- Assess which ones affect EU individuals. If yes, the EU AI Act applies.
- For high-risk systems, understand what compliance means. See the compliance checklist.
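Tying the steps together, here is a toy triage pass over an example inventory. The tools, tiers, and actions are illustrative only; real classification should follow the risk classification guide, and real deadlines the compliance checklist.

```python
# Toy end-to-end triage pass over an AI inventory: inventory, classify,
# check EU reach, list next actions. All entries and tiers are
# illustrative examples, not a classification authority.

inventory = [
    {"tool": "ChatGPT (drafting)",      "tier": "minimal", "eu_reach": True},
    {"tool": "ATS CV screening",        "tier": "high",    "eu_reach": True},
    {"tool": "CRM product recommender", "tier": "limited", "eu_reach": False},
]

for system in inventory:
    actions = []
    if system["eu_reach"]:
        actions.append("EU AI Act likely applies")
    if system["tier"] == "high":
        actions.append("technical documentation + conformity assessment by Aug 2026")
    elif system["tier"] == "limited":
        actions.append("add AI disclosure to users")
    print(f"{system['tool']}: {'; '.join(actions) or 'monitor'}")
```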
If you want a sector-specific, AI-type-specific assessment of your compliance exposure and a prioritized action plan, Bartram AI screens your actual AI usage and delivers a roadmap to August 2026.
Methodology
BCC/ISER AI Survey (March 2026):
- 668 UK businesses, predominantly 10–250 employees
- Online questionnaire, 12 minutes average completion
- Sectors: Recruitment, professional services, e-commerce, hospitality, financial services, customer service
- 94% of respondents were SMEs; 31% manufacturing, 69% services
- Sampling: drawn from chamber membership and treated as representative of it
- Limitations: Self-reported responses, no independent verification of AI usage or compliance status. Findings reflect respondent perception rather than measured compliance posture.
Government data:
- ONS: 25% of UK businesses using AI by late 2025 (includes all firm sizes; SME-specific rate higher)
- UK Business Data Survey 2024: Third of SMEs not fully aware of GDPR obligations (used for awareness baseline)
EU/Third-party research:
- appliedAI: 33% of AI startups report systems as high-risk vs 5–15% EC estimate
- EY global survey: C-suite respondents rate non-compliance with AI regulations as their highest-risk AI concern
Cross-References
For AI risk classification, see the risk classification guide. For the full regulatory framework, see EU AI Act explained. For compliance requirements, see the checklist.