
EU AI Act compliance for browser-deployed tools: what April 2026 actually requires

The EU AI Act is now enforceable. For browser-deployed AI tools, three specific obligations matter more than the others. Here's what auditors are actually asking for.

As of April 2026, the EU AI Act is fully enforceable. For browser-deployed AI tools (the ones your staff are actually using), three specific obligations matter more than the rest. They are what auditors are asking for in the inspections we're seeing.

1. Risk classification at the endpoint, not the vendor

A common misconception is that the Act classifies AI systems by vendor or by model. It doesn't. It classifies them by use case and context of deployment. The same LLM API can be a minimal-risk chatbot in a marketing workflow and a high-risk system in a CV-screening process, which falls squarely within the employment category of Annex III.

This matters because when an auditor arrives, they will not ask you what LLMs you use. They will ask what you use them for. The ownership of classification sits with you, the deployer, not with the model provider.
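
In practice, that classification takes the form of a per-use-case inventory rather than a vendor list. A minimal sketch of one, in TypeScript; the schema and field names are our own illustration, nothing here is mandated by the Act:

```typescript
// Illustrative sketch only: the Act does not prescribe a schema,
// and these field names are ours.
type RiskClass = "minimal" | "limited" | "high";

interface AiUseCase {
  id: string;
  description: string;       // what the model is used FOR, not which model
  model: string;             // the same model can appear at several risk levels
  annexIIICategory?: string; // set only when the use touches Annex III
  riskClass: RiskClass;
  owner: string;             // classification is owned by the deployer
}

// The same LLM API, classified twice by use case:
const inventory: AiUseCase[] = [
  {
    id: "mkt-chat-01",
    description: "Marketing copy drafting",
    model: "general-purpose LLM API",
    riskClass: "minimal",
    owner: "Marketing",
  },
  {
    id: "hr-screen-01",
    description: "CV screening and candidate ranking",
    model: "general-purpose LLM API",
    annexIIICategory: "Employment (Annex III, point 4)",
    riskClass: "high",
    owner: "HR",
  },
];
```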

2. Demonstrable data-governance controls

Article 10 obligations around training data don't apply to most organisations that merely consume LLMs. But Article 14 (human oversight) and Article 15 (accuracy, robustness, cybersecurity) do, if any of your uses touch high-risk categories under Annex III.

For browser-deployed AI, this translates to:

  • A documented inventory of the data categories permitted to leave your perimeter via prompts
  • Real-time controls that block non-permitted categories before a prompt is submitted
  • Audit logs that survive legal hold and cannot be repudiated by the user

The second bullet is where most organisations are failing an inspection today. Detective controls ("we noticed after the fact") are not sufficient under Article 15; for high-risk use cases, preventive controls are required.
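
For concreteness, here is a minimal sketch of a preventive check at the browser boundary. The regex detectors are deliberately naive placeholders (a real deployment would use proper classifiers), and every name in it is ours. The point is the shape: the check runs before the prompt leaves the perimeter, and non-permitted categories are blocked by default.

```typescript
// Minimal sketch of a preventive (not detective) prompt check.
// Category names and regex detectors are illustrative placeholders.
type DataCategory = "personal_data" | "source_code" | "financials";

const permittedCategories: Set<DataCategory> = new Set(["financials"]);

const detectors: Record<DataCategory, RegExp> = {
  personal_data: /\b[\w.+-]+@[\w-]+\.[\w.]+\b/, // crude email match
  source_code: /\b(function|class|import)\b/,
  financials: /\b(EBITDA|revenue|forecast)\b/i,
};

interface CheckResult {
  allowed: boolean;
  blockedCategories: DataCategory[];
}

// Runs BEFORE the prompt is submitted; blocking is the default outcome
// for any detected category that is not explicitly permitted.
function checkPrompt(prompt: string): CheckResult {
  const blocked = (Object.keys(detectors) as DataCategory[]).filter(
    (cat) => detectors[cat].test(prompt) && !permittedCategories.has(cat)
  );
  return { allowed: blocked.length === 0, blockedCategories: blocked };
}

// Example: this prompt never reaches the model.
checkPrompt("Summarise jane.doe@example.com's appraisal");
// => { allowed: false, blockedCategories: ["personal_data"] }
```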

3. Incident reporting within 72 hours

Article 73 requires notification of serious incidents to the relevant national competent authority within 72 hours. For browser-based AI leakage, a 'serious incident' includes unauthorised exfiltration of personal data or IP via a generative-AI tool.

This is where the audit trail you chose not to build in 2024 becomes an existential problem in 2026. You cannot report an incident within 72 hours if you don't know it happened, and you cannot make a coherent disclosure if your evidence is incomplete.
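
One way to make that evidence hold up, including the non-repudiation requirement from the bullet list above, is a hash-chained audit log: each entry commits to the one before it, so records cannot be silently edited or dropped after the fact. A minimal sketch, with field names of our own choosing; a production deployment would also sign entries and ship them off-device:

```typescript
import { createHash } from "node:crypto";

// Sketch of a hash-chained audit log: every entry commits to its
// predecessor, so no record can be edited or dropped undetected.
// Field names are ours, not a regulatory schema.
interface AuditEntry {
  timestamp: string;     // ISO 8601; needed to reconstruct a 72-hour window
  user: string;
  tool: string;
  action: "allowed" | "blocked";
  category?: string;     // data category involved, if any
  prevHash: string;      // hash of the previous entry ("" for the first)
  hash: string;
}

function appendEntry(
  log: AuditEntry[],
  entry: Omit<AuditEntry, "prevHash" | "hash">
): AuditEntry[] {
  const prevHash = log.length ? log[log.length - 1].hash : "";
  const hash = createHash("sha256")
    .update(JSON.stringify({ ...entry, prevHash }))
    .digest("hex");
  return [...log, { ...entry, prevHash, hash }];
}

// Verification recomputes every link; one edited entry breaks the chain.
function verifyChain(log: AuditEntry[]): boolean {
  return log.every((e, i) => {
    const expectedPrev = i === 0 ? "" : log[i - 1].hash;
    const { hash, prevHash, ...body } = e;
    const expectedHash = createHash("sha256")
      .update(JSON.stringify({ ...body, prevHash: expectedPrev }))
      .digest("hex");
    return prevHash === expectedPrev && hash === expectedHash;
  });
}
```

The design choice that matters is the append-only commitment: completeness of evidence, the Article 73 problem, becomes something you can check rather than something you assert.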

What compliance actually looks like

The inspections we've observed so far ask for four artefacts:

  1. Your AI use-case inventory, mapped to Annex III risk categories
  2. The technical controls that enforce your data-category policies at the browser boundary
  3. A sampling of audit events demonstrating the controls are actively used, usually covering the last 30 days (a sampling sketch follows below)
  4. Your incident-response playbook specific to AI tools, including who the competent authority is for your organisation

Note that two of the four are browser-level operational evidence, not policy documents. Inspectors have stopped accepting paperwork without operational proof. If your controls exist only on a SharePoint site, you are not compliant.
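
For the third artefact, the point is proof that the controls fire in practice, not merely that a log exists. A hypothetical sampling helper over audit entries shaped like the sketch in the previous section:

```typescript
// Audit entries shaped like the hash-chain sketch above, trimmed to the
// two fields the sampling needs.
interface SampledEvent {
  timestamp: string;            // ISO 8601
  action: "allowed" | "blocked";
}

// Summarises the inspection window (default 30 days).
function sampleWindow(events: SampledEvent[], days = 30) {
  const cutoff = Date.now() - days * 24 * 60 * 60 * 1000;
  const window = events.filter((e) => Date.parse(e.timestamp) >= cutoff);
  return {
    total: window.length,
    blocked: window.filter((e) => e.action === "blocked").length,
    allowed: window.filter((e) => e.action === "allowed").length,
  };
}
```

A healthy sample shows both outcomes: blocked events prove enforcement is live, and allowed events prove the tools are actually in use behind it.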

The uncomfortable truth

Most enterprises we brief are expecting their first genuine inspection within six months. The ones that are prepared have put preventive controls at the browser layer. The ones that aren't are hoping their first inspection is a paperwork check.

Hope is not a control.