EU AI Act Compliance Checklist for UK Businesses
Use this checklist to measure where you stand. Items are organised by compliance category and priority level:
- High priority: required by August 2026 and directly tied to regulatory deadlines.
- Medium priority: important for full compliance but somewhat more flexible on timeline.
- Low priority: useful for governance but not enforcement-focused.
All items assume you have already identified your AI systems and classified them by risk tier. If you haven’t, start with risk classification first.
Inventory and Classification
Priority: HIGH
- Listed all AI systems in your business (including ChatGPT, Claude, ATS, recommendation engines, email categorisation, content generation, anything with AI)
- Classified each system as: Unacceptable Risk / High Risk / Limited Risk / Minimal Risk
- Identified which systems affect EU individuals (EU customers, EU employees, EU job candidates)
- For each high-risk system: documented the category it falls into (hiring, credit, insurance, education, law enforcement, immigration, critical infrastructure)
- Documented your classification decisions in a record (date, reasoning, risk tier, EU exposure, current compliance status)
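A classification record like this can be kept as simple structured data so it is easy to review and update. A minimal sketch in Python; the field names are illustrative choices, not a schema the Act prescribes:

```python
from dataclasses import dataclass, field
from datetime import date

# Illustrative record for one classification decision.
# Field names are our own; the EU AI Act does not mandate a schema.
@dataclass
class ClassificationRecord:
    system_name: str
    risk_tier: str            # "unacceptable" | "high" | "limited" | "minimal"
    reasoning: str
    eu_exposure: bool
    compliance_status: str
    decided_on: date = field(default_factory=date.today)

record = ClassificationRecord(
    system_name="Applicant tracking system",
    risk_tier="high",
    reasoning="Screens EU job candidates; falls under the employment category",
    eu_exposure=True,
    compliance_status="gap analysis in progress",
)
```

Keeping the reasoning alongside the tier means the record answers a regulator's first question (why this classification?) without reconstruction from memory.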
High-Risk Systems: Governance and Documentation
Priority: HIGH — Required by August 2026
Conformity Assessment
- Conducted a conformity assessment for each high-risk system
- Documented that the system complies with EU AI Act high-risk requirements
- Assessed whether third-party conformity assessment is required (mandatory for certain categories, such as remote biometric identification; optional elsewhere, but it strengthens compliance posture)
- Maintained records of the assessment (for regulator review if needed)
Quality Management
- Established a documented quality management system covering:
- Development and deployment processes
- Monitoring and performance evaluation procedures
- Incident identification and reporting processes
- Data curation and validation procedures
- Regular review and update cycles
Risk Management
- Conducted a risk assessment for each high-risk system:
- Identified potential harms (discrimination, bias, accuracy failures, security breaches)
- Assessed likelihood and impact of each harm
- Documented mitigation strategies
- Identified residual risks that cannot be eliminated
- Documented the risk assessment in writing
- Reviewed and updated the assessment annually or when system changes
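The likelihood-and-impact step above is commonly done with a simple scoring matrix. A hedged sketch; the three-level scale and the threshold of 4 are illustrative choices, not requirements from the Act:

```python
# Illustrative likelihood x impact scoring for a risk assessment.
# The 1-3 scale and the residual-risk threshold are our own choices.
LEVELS = {"low": 1, "medium": 2, "high": 3}

def risk_score(likelihood: str, impact: str) -> int:
    """Multiply likelihood by impact on a 1-3 scale (score range 1-9)."""
    return LEVELS[likelihood] * LEVELS[impact]

def residual_risks(harms: list[dict], threshold: int = 4) -> list[str]:
    """Harms whose post-mitigation score still meets or exceeds the threshold."""
    return [h["harm"] for h in harms
            if risk_score(h["likelihood"], h["impact"]) >= threshold]

harms = [
    {"harm": "bias against protected groups", "likelihood": "medium", "impact": "high"},
    {"harm": "occasional misclassification", "likelihood": "low", "impact": "low"},
]
```

Whatever scale you choose, documenting it once and reusing it across systems makes annual reviews comparable year to year.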
Human Oversight
- Designated one or more persons responsible for high-risk AI system oversight
- Documented the oversight responsibility (job title, reporting line, specific authorities)
- Ensured the designated person(s):
- Have authority to override or stop the system
- Understand the system’s technical capabilities and limitations
- Monitor outputs regularly (frequency depends on risk, but at least monthly for hiring/credit systems)
- Can identify when the system is behaving unexpectedly
- Have documented procedures for escalation and intervention
Data Governance
- For training data:
- Documented data sources and provenance
- Assessed data quality and completeness
- Tested for bias against protected characteristics (particularly for hiring and credit systems)
- Maintained records of data curation and selection decisions
- For operational data:
- Ensured data is accurate and up to date
- Implemented procedures to correct inaccurate data
- Documented data retention and deletion procedures
Technical Documentation
- Created technical documentation including:
- System name, version, date of deployment
- General description (what the system does, in non-technical language)
- Intended purpose (what problem it solves, who uses it, in what context)
- Technical specifications (model architecture, input/output types, accuracy metrics)
- Limitations and known failure modes (accuracy on specific populations, performance boundaries)
- Training data used (description, sources, quality assessment)
- Testing and validation results
- Human oversight mechanisms
- Instructions for users (deployers using your system)
- Post-market monitoring procedures
- Made documentation available for regulatory review if needed
- Ensured documentation is updated when system changes
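One way to keep that documentation current is a completeness check that runs whenever the system changes. A sketch; the section names mirror the checklist above and are our own labels, not statutory ones:

```python
# Illustrative completeness check for technical documentation.
# Section names follow the checklist above; they are not statutory terms.
REQUIRED_SECTIONS = [
    "system_identity", "general_description", "intended_purpose",
    "technical_specifications", "limitations", "training_data",
    "testing_results", "human_oversight", "user_instructions",
    "post_market_monitoring",
]

def missing_sections(doc: dict) -> list[str]:
    """Return the names of required sections that are absent or empty."""
    return [s for s in REQUIRED_SECTIONS if not doc.get(s)]
```

Running this as part of a release checklist turns "documentation is updated when the system changes" from an intention into a gate.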
Incident Reporting
- Established procedures for identifying and reporting serious incidents:
- What qualifies as a “serious incident” (significant harm to individuals, systemic failures, discrimination)
- Who reports (oversight person, management)
- Reporting timeline (as soon as the incident is identified, and before the system continues operating with the unresolved risk)
- What information is documented (incident description, affected individuals, harm caused, root cause, remediation)
- Maintained incident logs
- Identified the relevant authority to report to (EU member state competent authority, ICO for UK-related incidents)
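An incident log only helps if entries are complete and append-only. A minimal sketch using JSON Lines; the field names follow the checklist above and are illustrative, not mandated:

```python
import json
from datetime import datetime, timezone
from pathlib import Path

# Illustrative append-only incident log (one JSON object per line).
# Field names mirror the checklist; the Act does not prescribe a format.
def log_incident(path, description, affected_individuals, harm,
                 root_cause, remediation):
    entry = {
        "logged_at": datetime.now(timezone.utc).isoformat(),
        "description": description,
        "affected_individuals": affected_individuals,
        "harm": harm,
        "root_cause": root_cause,
        "remediation": remediation,
    }
    with Path(path).open("a", encoding="utf-8") as f:
        f.write(json.dumps(entry) + "\n")
    return entry
```

Append-only storage with timestamps gives you a defensible timeline if a regulator later asks when an incident was identified and what was done.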
EU AI Database Registration
- Registered each high-risk system in the EU AI database, including:
- System name, version, date of deployment
- Provider and deployer contact details
- Intended purpose and geographical scope
- Risk category (hiring, credit, insurance, etc.)
- Conformity assessment status
- Ensured registration is completed before system deployment (not after)
- Updated registration if system details change
Limited-Risk Systems: Transparency
Priority: HIGH — Should be in place now; definitely by August 2026
Disclosure Mechanisms
- Chatbots and virtual assistants:
- Users are clearly informed the interaction is with an AI system
- Disclosure appears before or at the start of interaction (not buried in terms and conditions)
- Disclosure is clear and unambiguous (not vague like “we use AI to improve service”)
- Tested on desktop and mobile devices
- Tested with real users to ensure they understand it’s AI
- AI-generated content (text, images):
- Content is labelled as AI-generated (e.g., “Written with AI assistance,” “AI-generated image”)
- Label is visible (not hidden in fine print or metadata)
- Label is on every piece of content or clearly applies to a section (not just once on the page)
- Label is appropriate to the content type (on-screen for website content, in email body for emails, in video description for video)
- Customer service AI:
- AI-generated responses to customers disclose they’re AI-generated
- Automated customer responses clearly state they’re not from a human
- If AI drafts responses that humans review, edit, and send, disclosure may not be required (grey area; err toward disclosure)
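Centralising the disclosure text makes the labelling consistent across channels and easy to audit. A sketch; the label wordings are examples, not prescribed text from the Act:

```python
# Illustrative helper: attach a visible AI label to outbound content.
# The label wordings are examples only, not statutory phrases.
LABELS = {
    "text": "Written with AI assistance",
    "image": "AI-generated image",
    "chat": "You are chatting with an AI assistant",
}

def label_content(content: str, content_type: str) -> str:
    """Prefix content with its disclosure label; fail loudly if none exists."""
    label = LABELS.get(content_type)
    if label is None:
        raise ValueError(f"No disclosure label defined for {content_type!r}")
    return f"[{label}]\n{content}"
```

Failing loudly on an unknown content type is deliberate: it surfaces new channels that were launched without a disclosure decision.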
Transparency Testing
- Tested each AI system from a user perspective (as if you’re a customer)
- Confirmed the AI disclosure is visible and clear
- Tested that the system behaves consistently with what the disclosure says
- If disclosure says “This chatbot can’t process orders,” confirm it doesn’t attempt to process orders
- If it says “Responses will be reviewed by a human,” confirm that happens
- If disclosure is optional (e.g., using AI internally), documented your reasoning for not disclosing
EU Market Exposure Assessment
Priority: MEDIUM
- Identified which AI systems serve EU customers or affect EU residents
- Documented the scope of EU exposure for each system:
- How many EU customers or users?
- Which EU countries are affected?
- What data is processed?
- For systems with EU exposure, confirmed EU AI Act applies
- Assessed whether UK GDPR and/or sector-specific UK rules also apply (they likely do)
Provider-Deployer Responsibility Mapping
Priority: MEDIUM
- For each AI system, identified whether you are a provider, deployer, or both
- Reviewed AI provider terms of service and acceptable use policy:
- Confirmed you are using the system within its intended purpose
- Identified any restrictions on use (e.g., don’t use for hiring decisions without human oversight)
- Checked data processing terms (is provider processing EU data? Do they have appropriate safeguards?)
- For systems where you’re the deployer, documented that you are not relying on provider compliance to cover deployer obligations
- Ensured you have a data processing agreement with the provider if they process personal data on your behalf
UK Domestic Compliance
Priority: MEDIUM — Overlaps with but is separate from EU AI Act compliance
- UK GDPR Article 22 (Automated Decision-Making):
- Identified AI systems that make decisions with legal or significant effects on individuals
- For those systems: implemented transparency (told affected individuals)
- Provided the right to human review (individuals can ask for the decision to be reconsidered by a human)
- Ensured a lawful basis for the automated decision exists
- Equality Act (Discrimination):
- Assessed whether AI systems could discriminate against protected characteristics (race, gender, disability, age, etc.)
- For hiring systems: particularly careful assessment for bias
- Documented that you monitor for discriminatory outcomes and have a process to address them if found
- Sector-Specific Rules:
- If you’re in financial services: checked FCA AI guidance
- If you’re in employment: checked ACAS guidance on automated decision-making
- If you’re in e-commerce/consumer: checked CMA AI guidance
- If you’re in public sector: checked Cabinet Office AI guidance
Incident Response and Monitoring
Priority: MEDIUM — Ongoing
- Established monitoring procedures for each high-risk system:
- Regular checks that the system is performing as intended
- Monitoring for accuracy degradation (has performance changed?)
- Monitoring for bias or discrimination in outputs
- Frequency documented and resource-assigned
- Established incident response:
- If system performance degrades or produces discriminatory results, what happens?
- Who investigates?
- When is the system paused or stopped?
- How quickly can you respond?
- Is there a decision log of significant incidents?
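The accuracy-degradation check above can be reduced to a single guard that feeds the incident process. A sketch; the 5-point tolerance is an illustrative threshold you would set per system:

```python
# Illustrative drift check: flag the system for investigation when accuracy
# falls more than `tolerance` below its documented baseline.
# The default tolerance is an example, not a regulatory figure.
def needs_investigation(baseline_accuracy: float,
                        current_accuracy: float,
                        tolerance: float = 0.05) -> bool:
    return (baseline_accuracy - current_accuracy) > tolerance
```

Wiring a check like this into scheduled monitoring answers two checklist questions at once: how drift is detected, and how quickly you can respond.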
Documentation and Recordkeeping
Priority: MEDIUM
- Maintained a register of all AI systems in your business:
- System name, description, provider/vendor
- Risk classification (and reasoning)
- EU exposure (yes/no)
- Current compliance status
- Owner/responsible person
- Last review date
- Documented key compliance records:
- Risk assessments
- Conformity assessment
- Technical documentation (for high-risk systems)
- Human oversight procedures
- Data governance procedures
- Incident logs
- Monitoring results
- Organised records so they could be presented to a regulator if needed (not necessarily in perfect order, but findable)
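The register above is tabular data, so it can double as the source for compliance queries such as "which systems are overdue for review?". A sketch with illustrative rows and field names (the one-year review window is an example policy, not a statutory period):

```python
from datetime import date

# Illustrative register rows using the fields listed above.
register = [
    {"system": "Support chatbot", "provider": "vendor A", "risk": "limited",
     "eu_exposure": True, "status": "compliant", "owner": "Ops lead",
     "last_review": date(2025, 1, 10)},
    {"system": "CV screening tool", "provider": "vendor B", "risk": "high",
     "eu_exposure": True, "status": "gap: no conformity assessment",
     "owner": "HR director", "last_review": date(2024, 6, 1)},
]

def overdue_reviews(rows, today, max_age_days=365):
    """Systems not reviewed within the last max_age_days."""
    return [r["system"] for r in rows
            if (today - r["last_review"]).days > max_age_days]
```

Because every row carries an owner and a last-review date, the same structure drives the Action Plan step below: gaps map directly to a responsible person.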
Training and Awareness
Priority: LOW — Good practice but not strictly enforced
- Ensured the person overseeing high-risk systems understands:
- What the system does and its limitations
- How to identify when it’s performing unexpectedly
- How to escalate and override
- What a serious incident looks like
- Ensured employees using AI systems understand:
- What the system is (AI, not human judgment)
- What it can and can’t do reliably
- Any transparency obligations to customers (customers need to be told)
Action Plan
Once you’ve worked through the checklist, identify:
- High-priority gaps (items marked HIGH priority that are not checked)
- Medium-priority gaps (items marked MEDIUM priority that are not checked)
- Timeline (High priority items: must be done by August 2026, or August 2027 for high-risk systems embedded in regulated products. Medium/Low: ongoing)
- Owner (Who is responsible for closing each gap?)
- Resources (Do you need legal help, technical resources, or external support?)
What’s Next
If you want a structured assessment and action plan tailored to your specific AI systems and risk profile, Bartram AI screens your AI usage and delivers a prioritised roadmap. Or start with the full EU AI Act overview for regulatory context.
Cross-References
For high-risk system details and examples, see risk classification. For limited-risk transparency implementation, see chatbot compliance. For deadlines and timeline, see AI Act deadlines.