How to Classify Your AI Systems by Risk Tier
Everything hinges on risk tier. If your AI system is high-risk, you face intensive compliance requirements. Limited-risk requires transparency. Minimal-risk requires nothing specific. Unacceptable-risk is prohibited. Get your classification right and the rest of compliance follows. Get it wrong and you’re either over-preparing or under-preparing for risk you haven’t understood.
The EU AI Act defines four tiers based on the potential harm the system could cause. This is not a technical assessment of how good your AI is. It’s a regulatory assessment of the contexts in which the system is used and the impact it could have on people affected by it.
Step 1: Understand Your System
Before classifying, inventory what you have. By “AI system,” the Act means a machine-based system that infers from its inputs how to generate outputs such as predictions, content, recommendations, or decisions. In practice, this includes:
- Obvious AI: Chatbots, content generators (ChatGPT, Claude), image generators (Midjourney, DALL-E)
- Embedded AI: AI screening in applicant tracking systems, recommendation engines in e-commerce platforms, credit scoring in loan management software, dynamic pricing engines
- API-based AI: Your own application integrating OpenAI, Anthropic, Google, or similar providers via API
- In-house AI: Custom models or fine-tuned models you’ve built or deployed
For each system, answer the following (a structured record sketch follows the list):
- What does it do? (Describe its function in one sentence)
- What data does it use? (Customer data, employee data, financial data, public data?)
- Who is affected by it? (Customers, employees, job candidates, consumers?)
- What does it decide or recommend? (Hire/don’t hire, approve/reject credit, show/don’t show content?)
- Could it cause significant harm if it fails? (Lost job opportunity, financial loss, discrimination, loss of access to a service?)
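If it helps to keep the inventory machine-readable, here is a minimal sketch of one way to capture those five answers per system. The field names are this guide’s illustration, not terminology from the Act:

```python
# A minimal inventory record; field names are illustrative, not from the Act.
from dataclasses import dataclass

@dataclass
class AISystemRecord:
    name: str                    # e.g. "ATS Screening Engine"
    function: str                # what it does, in one sentence
    data_used: list[str]         # e.g. ["customer data", "public data"]
    affected_people: list[str]   # e.g. ["job candidates"]
    decides_or_recommends: str   # e.g. "ranks candidates for shortlisting"
    harm_if_failed: str          # e.g. "lost job opportunity, discrimination"
```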
Step 2: Check for Unacceptable Risk
Before moving to the other tiers, rule out unacceptable-risk uses. These are prohibited outright; the categories most relevant to businesses are:
Social scoring systems. AI that ranks people’s trustworthiness, reliability, or social standing based on behaviour, personal characteristics, or reputation. This is banned. If you’re using AI to create a “customer risk score” based on online behaviour, you may well be in prohibited territory.
Manipulation of vulnerable groups. AI specifically designed to manipulate people who are vulnerable due to age, disability, or other factors. Banned.
Real-time biometric identification in public spaces. Using AI to identify people via facial recognition, gait analysis, or similar biometrics, live and in public. Banned (with law enforcement exceptions that don’t apply to businesses).
If you’re in any of these categories, you can’t proceed to a different tier. You must stop using the system or change it fundamentally.
Assuming you’re not in prohibited territory, move to the other three tiers.
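To make this screening repeatable across an inventory, one approach is to tag each system with the practices it involves and check the tags against the prohibited categories above. A sketch, using this guide’s shorthand tags rather than the Act’s legal wording:

```python
# Prohibited-practice tags, using this guide's shorthand rather than
# the Act's legal wording.
PROHIBITED_PRACTICES = {
    "social_scoring",
    "manipulation_of_vulnerable_groups",
    "realtime_public_biometric_id",
}

def is_prohibited(practice_tags: set[str]) -> bool:
    """True if any tagged practice falls in the unacceptable-risk tier."""
    return bool(practice_tags & PROHIBITED_PRACTICES)

# A "customer risk score" built from online behaviour would be tagged:
print(is_prohibited({"social_scoring"}))  # True -> stop using or redesign
```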
Step 3: Assess for High Risk
High-risk systems are those deployed in areas where AI failures could cause significant harm to individuals. The Act lists specific high-risk contexts:
Employment and HR:
- AI used to screen CVs or job applications
- AI used to rank candidates
- AI used to analyse interview performance
- AI used to monitor or evaluate employee performance
- AI used to decide promotions or dismissals
Financial decisions:
- Credit scoring (loan applications, credit limits)
- Insurance underwriting (premium calculation, claims assessment)
- Insurance broker recommendations
Education and training:
- AI used to score essays or exams
- AI used to recommend course placement or specialisation
- AI used to assess student potential or “dropout risk”
Law enforcement and justice:
- AI used to predict offender risk or reoffending likelihood
- AI used to assign police resources based on crime prediction
- Facial identification used by law enforcement
Critical infrastructure:
- AI used to operate power grids, water systems, transport networks
Immigration and asylum:
- AI used to assess asylum applications or determine eligibility
Benefits and social security:
- AI used to determine welfare eligibility or benefit levels
Examples:
- You use an ATS (Applicant Tracking System) with built-in AI that screens résumés and ranks candidates. High-risk.
- You use an AI hiring tool that analyses video interviews. High-risk.
- You use an AI system to monitor employee productivity or performance. High-risk.
- You use an AI chatbot on your website to answer customer questions. Not high-risk (it’s limited-risk).
- You use AI to personalise product recommendations on your e-commerce site. Limited-risk if the recommendations are visible to customers (see Step 4); minimal-risk if the personalisation runs purely behind the scenes.
If your system falls into any high-risk category, stop here. It’s high-risk. Skip ahead to “What Each Tier Requires” to understand what that means.
If it doesn’t, continue to limited-risk.
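As with the prohibited check, you can encode the high-risk contexts above as tags and test each system against them. This is a sketch in this guide’s shorthand; the Act’s Annex III wording is more precise, so treat a match as a prompt for review, not a verdict:

```python
# High-risk context tags, abridged from the categories listed above.
HIGH_RISK_CONTEXTS = {
    "cv_screening", "candidate_ranking", "interview_analysis",
    "employee_monitoring", "promotion_or_dismissal",
    "credit_scoring", "insurance_underwriting",
    "exam_scoring", "course_placement", "dropout_risk",
    "offender_risk_prediction", "predictive_policing",
    "critical_infrastructure_operation",
    "asylum_assessment", "welfare_eligibility",
}

def is_high_risk(context_tags: set[str]) -> bool:
    """True if the system operates in any listed high-risk context."""
    return bool(context_tags & HIGH_RISK_CONTEXTS)

# An ATS that screens CVs and ranks candidates:
print(is_high_risk({"cv_screening", "candidate_ranking"}))  # True
```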
Step 4: Assess for Limited Risk
Limited-risk systems are those that interact with people or generate content, where transparency is the key control.
Systems that interact with people:
- Chatbots and virtual assistants
- Recommendation engines visible to end-users
- Emotion recognition systems
- Biometric categorisation (age estimation, gender classification from images)
Systems that generate or manipulate content:
- AI-generated text (articles, summaries, emails)
- AI-generated images or art
- Deepfakes
- AI that alters or filters images or video
Examples:
- You have a chatbot on your website answering customer support questions. Limited-risk. Visitors must know they’re talking to AI.
- You use AI to generate product descriptions for your e-commerce site. Limited-risk. The descriptions must be labelled as AI-generated.
- You use AI to create marketing copy for emails. Limited-risk if the copy goes out to customers as AI-generated content; minimal-risk if it’s an internal draft that a human reviews and reworks before sending.
- You use an AI-powered CRM that recommends which leads to contact next. Depends on visibility. If the recommendations surface to end-users, limited-risk. If they’re internal and just help your sales team, minimal-risk.
If your system is limited-risk, the main compliance obligation is transparency. You must disclose to users that they’re interacting with AI or viewing AI-generated content.
If it’s not high-risk or limited-risk, it’s minimal-risk.
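Steps 2 to 4 amount to a decision cascade: check the most severe tier first and fall through to minimal. A self-contained sketch, with the tag sets from the earlier sketches abridged so the block runs on its own:

```python
# The full cascade from Steps 2-4. Tag sets are abridged copies of the
# earlier sketches so this block runs standalone.
PROHIBITED_PRACTICES = {"social_scoring", "manipulation_of_vulnerable_groups",
                        "realtime_public_biometric_id"}
HIGH_RISK_CONTEXTS = {"cv_screening", "credit_scoring", "exam_scoring"}

def classify(practice_tags: set[str], context_tags: set[str],
             user_facing: bool, generates_content: bool) -> str:
    """Return the risk tier, checking the most severe categories first."""
    if practice_tags & PROHIBITED_PRACTICES:
        return "unacceptable"   # Step 2: stop using or redesign
    if context_tags & HIGH_RISK_CONTEXTS:
        return "high"           # Step 3: intensive compliance
    if user_facing or generates_content:
        return "limited"        # Step 4: transparency obligations
    return "minimal"            # everything else

# An internal lead-scoring feature, invisible to customers:
print(classify(set(), set(), user_facing=False, generates_content=False))
# -> "minimal"
```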
Step 5: Minimal Risk
Everything else. Internal analytics, spam detection, recommendation engines not visible to customers, AI that supports human decision-making without making decisions itself. No specific EU AI Act obligations for minimal-risk systems.
Note: “Minimal” doesn’t mean “no compliance.” UK GDPR applies to any system processing personal data. Sector-specific rules apply. But the intensive high-risk compliance burden doesn’t apply.
What Each Tier Requires
High-Risk Compliance (Intensive)
- Conformity assessment: Document that your system meets the Act’s requirements
- Quality management system: Documented processes for development, deployment, monitoring, incident response
- Human oversight: Designated responsible person(s) with authority to override or stop the system
- Risk assessment: Identify potential harms and mitigation strategies
- Data governance: Document data provenance, curation, testing, validation; prevent bias
- Technical documentation: System description, purpose, technical specs, data, limitations, accuracy metrics, testing, human oversight
- Incident reporting: Report serious incidents to relevant authorities
- EU AI database registration: Register the system before deployment
Time-intensive. Most SMEs using high-risk AI are not yet fully compliant with these requirements.
Limited-Risk Compliance (Moderate)
- Transparency disclosure: Tell users they’re interacting with AI or viewing AI-generated content
- Clear, simple language: The disclosure must be easily understood, not in fine print
- Testing: Ensure the transparency mechanism actually works and is visible to users
Much simpler than high-risk. The main failure mode is pretending the disclosure isn’t necessary.
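One way to make the disclosure reliable is to attach it in code rather than trusting page copy. A sketch, assuming a chatbot backend where you control the reply pipeline (the wording and function name are illustrative):

```python
# Prefix the first reply in every chat session with a plain-language
# disclosure, and test that the mechanism actually fires.
AI_DISCLOSURE = "You're chatting with an AI assistant, not a human."

def with_disclosure(reply: str, first_message: bool) -> str:
    return f"{AI_DISCLOSURE}\n\n{reply}" if first_message else reply

# The "testing" obligation, in its simplest possible form:
assert with_disclosure("How can I help?", True).startswith(AI_DISCLOSURE)
```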
Minimal-Risk Compliance (Baseline)
- Follow existing UK GDPR and sector-specific rules
- No specific EU AI Act obligations
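If you’re tracking several systems, it can help to hold these obligations as data so a classified system maps straight to an action list. This summarises the guide’s reading above, not the Act’s text:

```python
# Headline obligations per tier, as summarised in this guide.
TIER_OBLIGATIONS = {
    "unacceptable": ["stop using the system or change it fundamentally"],
    "high": ["conformity assessment", "quality management system",
             "human oversight", "risk assessment", "data governance",
             "technical documentation", "incident reporting",
             "EU AI database registration"],
    "limited": ["transparency disclosure", "clear, simple language",
                "test that the disclosure is visible"],
    "minimal": ["existing UK GDPR and sector-specific rules only"],
}

for obligation in TIER_OBLIGATIONS["high"]:
    print("-", obligation)
```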
Worked Examples
Example 1: Recruitment Agency Using AI ATS
You use a SaaS applicant tracking system that includes AI-powered CV screening. The system reviews applications, scores candidates based on qualifications and keywords, and ranks them for you.
Classification: High-risk. AI is used to screen candidates and rank them in employment decisions. This is explicitly listed as high-risk.
Requirements: Conformity assessment, documentation, human oversight (you must review the AI’s shortlist, not just accept it), risk assessment (what if the AI filters out qualified candidates with non-traditional backgrounds?), data governance, incident reporting.
Timeline: Full compliance required by August 2026 (or December 2027 if the Omnibus extension is adopted).
Example 2: E-Commerce Company with Recommendation Engine
Your website uses AI to recommend products based on customer browsing history. The recommendations appear as “customers like you also bought” on product pages.
Classification: Limited-risk. AI generates recommendations visible to users. Users must know it’s AI.
Requirements: Disclose that recommendations are AI-generated. “Recommended for you based on AI analysis” or similar clear wording.
Timeline: Transparency should be in place now if you’re serving EU customers.
Example 3: Marketing Agency Using ChatGPT
You use ChatGPT internally to draft email copy for client campaigns. The copy is reviewed and edited by a human before sending.
Classification: Minimal-risk. AI generates text, but it’s not published as AI-generated (it’s reviewed and incorporated into human-authored campaigns). It’s internal tooling, not a customer-facing service.
Requirements: No specific EU AI Act obligations. General GDPR applies (ensure you’re not feeding customer data into ChatGPT without a proper data processing agreement).
Timeline: Minimal.
Example 4: Financial Services Company Using Credit Scoring AI
You use an AI system to score credit applications and make loan approval decisions.
Classification: High-risk. Credit scoring is explicitly listed as high-risk.
Requirements: Full high-risk compliance. This is particularly stringent for credit systems because the Act requires explainability (applicants must be able to understand why they were approved or rejected) and human oversight (a person must review the AI’s decision before approving or denying credit).
Timeline: August 2026 (or December 2027).
Example 5: SaaS Company with Emotion Recognition
Your platform includes an emotion detection feature that analyses customer facial expressions during video calls to gauge engagement.
Classification: Limited-risk at minimum. Emotion recognition carries explicit transparency obligations. Watch the context, though: the Act prohibits emotion recognition in workplace and education settings, so pointing the same feature at employees rather than customers would land in prohibited territory.
Requirements: Transparency. Users must know the system is analysing their emotions.
Timeline: Implement transparency now.
How to Document Your Classification
For each AI system, create a brief record (a machine-readable sketch follows the template):
System Name: [e.g., ATS Screening Engine]
Function: [e.g., Ranks job applicants based on CV analysis]
Risk Tier: [Unacceptable / High / Limited / Minimal]
Reasoning: [e.g., "Used in employment decisions; AI Act Annex III classifies CV screening as high-risk"]
Affected Individuals: [e.g., Job candidates within the EU]
EU Market Exposure: [Yes / No]
Current Compliance Status: [Not compliant / Partially compliant / Compliant]
Deadline: [August 2026 or December 2027 if Omnibus adopted]
Next Steps: [e.g., Begin conformity assessment, document intended purpose]
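The same record as structured data, filled in for the recruitment-agency example above, so it can live in version control next to the rest of your compliance notes (values illustrative):

```python
# One classification record, mirroring the template fields above.
classification_record = {
    "system_name": "ATS Screening Engine",
    "function": "Ranks job applicants based on CV analysis",
    "risk_tier": "high",
    "reasoning": "Used in employment decisions; Annex III covers CV screening",
    "affected_individuals": "Job candidates within the EU",
    "eu_market_exposure": True,
    "compliance_status": "not compliant",
    "deadline": "August 2026 (or December 2027 if Omnibus adopted)",
    "next_steps": ["begin conformity assessment", "document intended purpose"],
}
```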
Keep this record updated. If you add a new AI system, classify it immediately.
What’s Next
Once you’ve classified your systems, the path forward is clear. High-risk systems require intensive work by August 2026. Limited-risk systems need transparency mechanisms in place now. Minimal-risk systems require standard compliance attention.
If you want to understand your full AI compliance exposure — not just classification but also implementation priorities, estimated effort, and a roadmap to August 2026 — Bartram AI screens your AI usage and delivers a prioritised action plan. Or start with the checklist to measure where you stand.
Cross-References
For transparency requirements for limited-risk systems, see chatbot compliance. For high-risk systems in hiring, see employment compliance. For data governance implications, see GDPR compliance.