
AI Compliance for Chatbots, Automated Emails, and Customer Service Tools

Guide · 8 min read · Updated 2026-03-23


Chatbots and customer service AI are classified as limited-risk under the EU AI Act. That’s good news and bad news. Good news: you don’t need the intensive compliance machinery required for high-risk systems (no conformity assessment, no technical documentation, no database registration). Bad news: you can’t ignore compliance entirely. The core obligation is transparency.

This article walks through what transparency means for the three most common limited-risk AI systems SMEs use: chatbots, AI-powered customer service tools, and automated emails. The requirement is simple. The implementation matters.

Why These Are Limited-Risk

Chatbots, virtual assistants, and systems that generate or process content for human interaction are classified as limited-risk because they interact with people and could influence their behaviour. The primary harm isn’t financial or physical — it’s trust and autonomy. If someone is interacting with an AI and doesn’t know it’s AI, they’re making decisions based on false assumptions about who or what they’re talking to. Transparency is the control that prevents that harm.

The compliance obligation is straightforward: you must disclose that the user is interacting with AI or viewing AI-generated content. The detail is in how you do it effectively.

Obligation 1: Chatbots and Virtual Assistants

The Requirement: Users must be clearly informed that they are interacting with an AI system, not a human.

What This Means: Your chatbot’s first message, or the interface itself, must disclose that it’s AI. Not buried in terms and conditions. Not in tiny text. Clear, obvious, in the language the user is reading.

Examples of compliant disclosures:

  • “You’re chatting with an AI assistant. I can help with FAQs, but I can’t process orders or handle confidential information.”
  • “This is an automated chatbot powered by AI. For urgent issues, email our support team.”
  • A clear visual indicator: “Powered by AI” or “AI Assistant” displayed prominently in the chat window.
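If your chat widget is configured in code, the disclosure can be baked in rather than left to manual copy edits. A minimal TypeScript sketch, where `ChatWidgetConfig` and `mountChatWidget` are hypothetical names standing in for whatever chat library you actually use:

```typescript
// Sketch: bake the AI disclosure into the widget itself, so it cannot
// be accidentally dropped. All names here are hypothetical.

interface ChatWidgetConfig {
  badgeLabel: string; // persistent visual indicator in the chat header
  greeting: string;   // first message shown before the user types anything
}

const config: ChatWidgetConfig = {
  badgeLabel: "AI Assistant",
  greeting:
    "You’re chatting with an AI assistant. I can help with FAQs, " +
    "but I can’t process orders or handle confidential information.",
};

// mountChatWidget(config); // wire into your real chat library
```

Putting the disclosure in both the greeting and a persistent badge covers the first-message and visual-indicator approaches above at once.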

Examples of non-compliant disclosures:

  • “We use artificial intelligence to improve your experience” — vague, not a clear disclosure that the user is talking to AI.
  • Terms and conditions that mention the chatbot is AI, if users don’t read them — buried disclosures don’t count.
  • “Chat with our team” when it’s a chatbot — misleading about whether it’s human.
  • A small icon indicating AI that users might miss if they scroll quickly.

How to Check It Works:

  1. Open your chatbot as a new user would.
  2. Before you type anything, does it clearly state “This is an AI chatbot”?
  3. If the disclosure is in a help section or settings, test whether a real user would find it.
  4. If the disclosure is visual (a badge, icon), test whether it’s visible on mobile and desktop.
  5. Does the chatbot then behave consistently with the disclosure? (If it says “I can’t process orders,” and someone tries to place an order, the chatbot should acknowledge its limitation.)
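Steps 2 and 4 can be partially automated in an end-to-end test. A sketch using Playwright, where the URL and the `.chat-message` selector are assumptions to replace with your own site and your widget's real selectors:

```typescript
import { test, expect } from "@playwright/test";

// Re-run with a mobile viewport to cover step 4, e.g.
// test.use({ viewport: { width: 390, height: 844 } });

test("chatbot discloses it is AI before the user types", async ({ page }) => {
  await page.goto("https://example.com"); // your site
  await page.getByRole("button", { name: /chat/i }).click(); // open the widget

  // The first visible message must contain a clear AI disclosure (step 2).
  const firstMessage = page.locator(".chat-message").first();
  await expect(firstMessage).toContainText(/AI (assistant|chatbot)/i);
});
```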

Obligation 2: AI-Powered Customer Service and Categorisation

The Requirement: If an AI system categorises, routes, or responds to customer communications, users must know this is happening.

What This Means: Many SaaS customer service tools now include AI that categorises support tickets, routes them to the right team, suggests responses, or auto-responds to common questions. If an AI is handling customer communication (even in part), the customer should know.

Concrete Scenario: A customer emails your support address. Your customer service platform uses AI to categorise the email as a billing question and route it to the billing team. The customer doesn’t see the AI layer — it’s internal. But the response might come from either an AI (auto-generated) or a human (reading the AI’s suggested response). The customer needs to know whether they’re talking to AI.

How to Implement:

  • If your system sends an automated response to the customer, that response should disclose it’s AI-generated. “Thank you for contacting us. This is an automated response powered by AI. A member of our team will review your message and respond within 24 hours.”
  • If your system categorises and routes tickets internally but a human responds, the human’s response doesn’t need to disclose the AI routing layer (the customer doesn’t see it). But if an AI-generated response is sent to the customer, it does.
  • If your system suggests responses to support staff, the staff member can review and send as their own without AI disclosure (they’re not showing AI-generated content to the customer). But if the suggested response is sent verbatim without human review, it needs disclosure.
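One way to encode the human-review rule above is to attach the disclosure at send time, based on whether a person actually reviewed the draft. A sketch with hypothetical types:

```typescript
// Sketch: prepend an AI disclosure whenever an AI-drafted reply reaches
// the customer without human review. All names here are hypothetical.

interface DraftResponse {
  body: string;
  aiGenerated: boolean;   // did the AI draft this text?
  humanReviewed: boolean; // did a person review and edit it before sending?
}

const AI_DISCLOSURE =
  "This is an automated response powered by AI. " +
  "A member of our team will review your message and respond within 24 hours.";

function prepareOutgoingReply(draft: DraftResponse): string {
  // AI-drafted text sent verbatim needs a disclosure; a reply a human
  // reviewed and sent as their own does not.
  if (draft.aiGenerated && !draft.humanReviewed) {
    return `${AI_DISCLOSURE}\n\n${draft.body}`;
  }
  return draft.body;
}
```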

How to Check It Works:

  1. Set yourself up as a test customer and submit a support request.
  2. For every response you receive, determine: is this AI-generated or human-written?
  3. If it’s AI-generated, is that disclosed clearly?
  4. If it’s human-written but informed by AI suggestions, check whether the human modified it or sent it verbatim. If verbatim, it may need disclosure.

Obligation 3: AI-Generated Content

The Requirement: Content generated by AI and published to customers must be disclosed as AI-generated.

What This Means: If you use AI to write product descriptions, email copy, blog posts, or marketing content, and you publish that content publicly or send it to customers, the audience should know it’s AI-generated.

Concrete Scenario: You use Claude to generate product descriptions for your e-commerce site. You review them, make minor edits, and publish them. The descriptions need to disclose they were AI-generated.

Examples of Compliant Approaches:

  • “This product description was written with AI assistance” — disclosed at the top of each product page.
  • A footer note: “Content on this page includes AI-generated text.”
  • A disclaimer in your privacy policy that states “We use AI to generate marketing and product content” plus a visible label on the content itself.
  • Email copy with a disclosure: “This email was written with AI assistance and reviewed by our team.”
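If your product pages or emails are generated from templates, the label can travel with the content instead of being pasted in by hand. A sketch, assuming a hypothetical content record:

```typescript
// Sketch: carry an AI-provenance flag on each content record so the
// template renders the disclosure automatically. Hypothetical names.

interface ContentRecord {
  title: string;
  html: string;
  aiAssisted: boolean; // AI generated or substantially drafted the text
}

function renderProductDescription(record: ContentRecord): string {
  const label = record.aiAssisted
    ? '<p class="ai-disclosure">This product description was written with AI assistance.</p>'
    : "";
  return `${label}<h2>${record.title}</h2>${record.html}`;
}
```

Keeping the flag on the record means a new page can't ship without the disclosure decision being made explicitly.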

Examples of Non-Compliant Approaches:

  • Publishing AI-generated content with no disclosure that it’s AI.
  • Disclosing AI use in a privacy policy but not on the specific content.
  • Using AI internally to draft content that’s then heavily edited by humans without disclosing the AI involvement. (Grey area: if AI is used only to help structure or brainstorm, but the human writes all final content, disclosure may not be required. If AI generates substantial text, disclosure is required.)

How to Check It Works:

  1. For each piece of content you publish, ask: was AI used to generate or significantly contribute to this?
  2. If yes, is there a clear, visible disclosure that it’s AI-generated?
  3. If the content is personalised or sent individually (emails), does the disclosure appear in a prominent location (subject line, opening line, or clear visual indicator)?

Obligation 4: Content Manipulation and Deepfakes

The Requirement: If you alter, filter, or synthetically generate images, video, or audio, users must be told.

What This Means: Deepfakes, AI-generated images, or heavily AI-processed video must be disclosed as such.

Concrete Scenario: You use AI to touch up product photos (remove backgrounds, enhance lighting) or generate lifestyle images for marketing. These need to be disclosed as AI-generated or AI-processed.

How to Implement:

  • Product photos processed by AI: “Image processed with AI enhancement” or “AI-edited image.”
  • Synthetic images: “AI-generated image” or “This image was created with AI.”
  • Video with synthetic elements: Disclosure in the video description or as an on-screen label.
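The same pattern works for images: store the provenance with the asset and let the template render the label. A sketch with hypothetical names:

```typescript
// Sketch: render a visible provenance label alongside each image.
// `ImageAsset` and the provenance values are hypothetical.

type Provenance = "original" | "ai-enhanced" | "ai-generated";

interface ImageAsset {
  src: string;
  alt: string;
  provenance: Provenance;
}

const LABELS: Record<Provenance, string> = {
  original: "",
  "ai-enhanced": "Image processed with AI enhancement",
  "ai-generated": "AI-generated image",
};

function renderImage(asset: ImageAsset): string {
  const caption = LABELS[asset.provenance];
  return `<figure>
  <img src="${asset.src}" alt="${asset.alt}">
  ${caption ? `<figcaption>${caption}</figcaption>` : ""}
</figure>`;
}
```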

Rare for most SMEs: Most SMEs aren’t generating deepfakes or doing sophisticated video synthesis. But if you’re using AI to process images for product photos or marketing visuals, disclose it.

Implementation Checklist

For each limited-risk AI system, work through this checklist:

  • Chatbots: First interaction clearly states “This is an AI chatbot”?
  • Chatbot disclosure visible: On desktop and mobile?
  • Chatbot consistency: Behaviour matches the disclosed capability?
  • Customer service: AI-generated responses disclose they’re AI?
  • Email responses: Auto-responses state they’re AI-generated?
  • Product descriptions: Published descriptions disclose AI generation?
  • Marketing copy: Email and web copy disclose AI involvement?
  • Images: AI-generated or heavily processed images are labelled?
  • Test as a customer: Go through the disclosure from a user’s perspective?
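If you want to track this audit over time rather than run it once, the checklist translates directly into a small data structure. A sketch, all names hypothetical:

```typescript
// Sketch: the checklist above as a trackable structure, so each audit
// run records what passed and what still needs fixing.

interface ChecklistItem {
  id: string;
  question: string;
  passed: boolean | null; // null = not yet checked
}

const audit: ChecklistItem[] = [
  { id: "chatbot-disclosure", question: "First interaction clearly states it is an AI chatbot?", passed: null },
  { id: "chatbot-visibility", question: "Disclosure visible on desktop and mobile?", passed: null },
  { id: "autoresponse-label", question: "Auto-responses state they are AI-generated?", passed: null },
  { id: "content-label", question: "Published AI-generated content is labelled?", passed: null },
  // …extend with the remaining items above.
];

const outstanding = audit.filter((item) => item.passed !== true);
console.log(`${outstanding.length} checks outstanding`);
```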

Common Failure Modes

Disclosure hidden in terms and conditions. Users don’t read these. Disclosure must be in the immediate context of the interaction.

Vague language. “We use AI to improve your experience” isn’t a disclosure that you’re interacting with AI. Be explicit: “You’re chatting with an AI chatbot.”

No visual test. Write the disclosure, then test it with someone unfamiliar with your system. Can they immediately understand they’re talking to AI?

Inconsistent behaviour. Your chatbot says it can’t handle orders, but the next message suggests it can. Disclosure must match reality.

Assuming provider compliance equals deployer compliance. Your chatbot platform provider might have transparency requirements. But you, the deployer, are responsible for ensuring the disclosure is actually implemented on your site.

What’s Next

  1. Audit your current systems. For each limited-risk system (chatbot, customer service tool, AI-generated content), check whether transparency is actually implemented.
  2. Implement missing disclosures. Add clear, visible language disclosing that users are interacting with AI or viewing AI-generated content.
  3. Test from the user perspective. Go through the experience as a real customer would. Does the disclosure appear before they interact with the AI?
  4. Document your implementation. Record where and how you disclose AI use. This is useful if a regulator asks.

Limited-risk compliance is genuinely achievable. The barrier isn’t complexity; it’s awareness. Many businesses simply don’t realise they need to disclose. Once you do, implementation is straightforward.


Cross-References

For the full risk classification framework, see how to classify your AI systems. For what high-risk AI (like hiring) requires, see the full EU AI Act overview. For a complete compliance checklist, see the checklist.
