EU AI Act: What It Means for AI Agent Deployments
A practical guide for developers deploying autonomous AI agents in the EU — classification, obligations, fines, and how to prepare before the August 2026 enforcement deadline.
The deadline
August 2, 2026. That's when most of the EU AI Act's obligations become enforceable. Five months from now.
If you're deploying AI agents in the EU — or serving EU customers — this applies to you. The fines are not theoretical: up to 35 million euros or 7% of global annual turnover, whichever is higher.
No mainstream AI agent framework currently addresses this out of the box. That's a gap we're trying to fix.
How AI agents get classified
The EU AI Act uses a risk-based classification system. Most AI agents fall into two categories:
Limited risk — The majority of business automation agents. Email triage, content drafting, lead scoring, data analysis, scheduling. These require transparency obligations: you must tell users they're interacting with AI, and you must keep logs of what the system does.
High risk — Agents that make decisions affecting people's rights, employment, creditworthiness, or access to services. If your agent scores job applicants, evaluates loan eligibility, or makes legal recommendations, you're in high-risk territory. This triggers a full compliance regime: risk assessments, technical documentation, human oversight mechanisms, accuracy and robustness testing, and ongoing monitoring.
The classification depends on the use case, not the technology. The same agent framework can be limited risk for one customer and high risk for another.
What you need (at minimum)
Even for limited-risk deployments, the Act requires:
1. Transparency — Users must know they're interacting with AI. AI-generated content must be labeled.
2. Logging — Maintain records of what the system did, when, and why. Audit trails are mandatory.
3. Human oversight — For anything beyond trivial automation, humans must be able to intervene, override, or shut down the system.
4. Risk assessment — Document what can go wrong and how you mitigate it.
5. Technical documentation — Describe the system's capabilities, limitations, and intended use.
For high-risk applications, add: conformity assessments, bias testing, data governance, ongoing monitoring, and incident reporting.
How Klawty addresses this
We didn't build compliance features as an afterthought. Several of Klawty's core architectural decisions happen to align with what the EU AI Act requires:
Audit logging — Every agent action is logged with: who (which agent), what (tool called, parameters), when (timestamp), and result (success/failure/output). This is the activity log that powers the dashboard — and it's exactly what Article 12 requires for record-keeping.
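A minimal sketch of what one such entry could look like, assuming a JSON-lines log. The field names here are illustrative, not Klawty's actual schema:

```python
import json
from dataclasses import dataclass, asdict
from datetime import datetime, timezone

@dataclass
class AuditEntry:
    """One logged agent action: who, what, when, and the result."""
    agent: str      # who: which agent acted
    tool: str       # what: the tool that was called
    params: dict    # what: the parameters it was called with
    timestamp: str  # when: ISO-8601 UTC timestamp
    result: str     # outcome: "success" or "failure"
    output: str = ""  # truncated output kept for the audit trail

def log_action(agent, tool, params, result, output=""):
    """Serialize one action as a JSON line, ready to append to the log."""
    entry = AuditEntry(
        agent=agent, tool=tool, params=params,
        timestamp=datetime.now(timezone.utc).isoformat(),
        result=result, output=output,
    )
    return json.dumps(asdict(entry))
```

An append-only format like this is what makes the trail auditable: each line is self-describing, timestamped, and cheap to ship into a dashboard.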
Human oversight — The 5-tier autonomy model (AUTO, AUTO+, PROPOSE, CONFIRM, BLOCK) is a human-in-the-loop system by design. PROPOSE actions get a 15-minute rollback window. CONFIRM actions wait for explicit human approval. BLOCK actions are hardcoded refusals. This maps directly to Article 14's human oversight requirements.
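The tier logic above can be sketched as a simple dispatcher. The function and return values are illustrative, not Klawty's API:

```python
from enum import Enum

class Tier(Enum):
    AUTO = "auto"        # execute immediately
    AUTO_PLUS = "auto+"  # execute immediately, with extra logging detail
    PROPOSE = "propose"  # execute, but keep a rollback window open
    CONFIRM = "confirm"  # wait for explicit human approval
    BLOCK = "block"      # hardcoded refusal

ROLLBACK_WINDOW_MIN = 15  # the PROPOSE rollback window described above

def dispatch(tier: Tier, approved: bool = False) -> str:
    """Map an action's tier to what happens next."""
    if tier is Tier.BLOCK:
        return "refused"
    if tier is Tier.CONFIRM:
        return "executed" if approved else "awaiting approval"
    if tier is Tier.PROPOSE:
        return f"executed (rollback open {ROLLBACK_WINDOW_MIN} min)"
    return "executed"  # AUTO and AUTO+
```

The point of the design: human intervention isn't bolted on at the edges, it's a branch every action passes through.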
Deny-by-default security — The klawty-policy.yaml policy engine restricts what agents can access: network, filesystem, shell commands. This is risk mitigation documented in code, not just in a PDF.
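A deny-by-default check can be sketched like this. The policy shape and allowlist entries are hypothetical, not the actual klawty-policy.yaml schema:

```python
# Deny-by-default: an action is permitted only if it matches an explicit
# allow rule; anything not listed is refused. Policy shape is illustrative.
POLICY = {
    "network":    ["api.example.com"],         # hosts agents may reach
    "filesystem": ["/workspace"],              # path prefixes agents may touch
    "shell":      ["git status", "git diff"],  # exact commands agents may run
}

def is_allowed(category: str, request: str) -> bool:
    """Return True only for explicitly allowlisted requests."""
    allowed = POLICY.get(category, [])  # unknown category -> empty -> deny
    if category == "filesystem":
        return any(request.startswith(prefix) for prefix in allowed)
    return request in allowed
```

Note the failure mode: an unrecognized category or an unlisted command falls through to a refusal, never to a permission.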
PII detection — The privacy router auto-detects email addresses, phone numbers, credit card numbers, and IBANs before they reach cloud LLMs. Sensitive data routes to local models or gets redacted. This supports GDPR compliance (which intersects with the AI Act) and demonstrates data governance.
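The detection step can be sketched with regex matching. These patterns and the routing function are illustrative simplifications, not the actual privacy router — a production detector would add validation such as Luhn checks for card numbers and per-country IBAN lengths:

```python
import re

# Illustrative patterns for the four PII categories mentioned above.
PII_PATTERNS = {
    "email": re.compile(r"[\w.+-]+@[\w-]+\.[\w.-]+"),
    "phone": re.compile(r"\+?\d[\d\s().-]{7,}\d"),
    "card":  re.compile(r"\b(?:\d[ -]?){13,16}\b"),
    "iban":  re.compile(r"\b[A-Z]{2}\d{2}[A-Z0-9]{11,30}\b"),
}

def detect_pii(text: str) -> set[str]:
    """Return the set of PII categories found in the text."""
    return {name for name, pat in PII_PATTERNS.items() if pat.search(text)}

def route(text: str) -> str:
    """Send text with PII to a local model; everything else may go to the cloud."""
    return "local" if detect_pii(text) else "cloud"
```

The routing decision is the part that matters for compliance: sensitive data never leaves the machine unless it has been redacted first.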
Proposal system — When an agent creates a proposal, it records: the proposing agent, the action, the risk tier, the evidence, and the approval chain. This is traceable decision-making — exactly what regulators want to see.
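The record a proposal carries can be sketched as a small data structure. The field and method names are illustrative, not Klawty's schema:

```python
from dataclasses import dataclass, field

@dataclass
class Proposal:
    """A traceable agent proposal: who proposed what, why, and who approved."""
    agent: str        # the proposing agent
    action: str       # what it wants to do
    risk_tier: str    # e.g. "PROPOSE" or "CONFIRM"
    evidence: list[str]                 # why the agent believes this is right
    approval_chain: list[str] = field(default_factory=list)  # who signed off

    def approve(self, reviewer: str) -> None:
        """Append a reviewer to the approval chain, preserving order."""
        self.approval_chain.append(reviewer)
```

Because every proposal keeps its evidence and its approval chain together, the question "why did the system do this, and who allowed it?" has a one-record answer.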
Pre-classification with ARCA
ARCA is a compliance platform built by dcode technologies (the same team behind Klawty) specifically for the EU AI Act. Klawty deployments are pre-classified with ARCA, which means:
- Your agent system comes with a risk classification already documented
- Technical documentation templates are pre-filled based on your configuration
- The audit logging format matches ARCA's compliance dashboard
- You get a head start on conformity assessment if your use case moves into high-risk territory
The full EU AI Act compliance pack is available for 1,500 euros plus 99 euros per month through arca.io.
Why this matters now
Five months until enforcement. Here's what most teams deploying AI agents haven't done:
- Classified their system under the Act's risk categories
- Documented their agent's capabilities and limitations
- Implemented audit logging that meets Article 12 standards
- Built human oversight mechanisms beyond "we can turn it off"
- Assessed the PII their agents process and how it's handled
If you're building on a framework that has no opinion on any of this — CrewAI, LangGraph, AutoGen, OpenAI Agents SDK — you're building the compliance layer yourself. From scratch. In five months.
Or you can start with a framework that already has the infrastructure.
curl -fsSL https://klawty.ai/install.sh | bash
klawty onboard
For the full compliance platform: arca.io