2026-03-20 · dcode · security, architecture, nemoclaw

The Security Model: Deny Everything, Allow Explicitly

How Klawty's 3-layer security stack prevents AI agents from accessing the network, filesystem, and shell by default — and why every agent framework should do this.

The problem with AI agents

An AI agent with shell access can run rm -rf /. An agent with network access can exfiltrate your database to a remote server. An agent with filesystem write access can overwrite your SSH keys.

Most agent frameworks ship with all of this enabled by default. CrewAI, LangGraph, AutoGen — they give agents unrestricted tool access and trust the LLM to behave. That works in demos. In production, it's a liability.

We learned this the hard way. Within the first week of our production deployment, an agent tried to delete a log directory, forwarded a client email to the wrong recipient, and burned $47 on a health check loop that escalated to the most expensive model tier.

The fix wasn't better prompts. It was better architecture.

Layer 1: Docker exec sandbox

Every shell command an agent executes runs inside an isolated Docker container:

- No network access — the container has no network interface
- Read-only root filesystem — agents can't modify system files
- Memory limited — prevents resource exhaustion
- Time limited — commands that run too long get killed
- No privilege escalation — no sudo, no setuid

The agent thinks it's running ls workspace/data/. It is — but inside a container that can't see or touch anything outside the mounted workspace directory.
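For illustration, here's how such a container invocation might be assembled using standard docker run flags. The image name, limits, and function name are assumptions for the sketch, not Klawty's actual configuration:

```python
def build_sandbox_argv(command: str, workspace: str,
                       memory_mb: int = 512) -> list[str]:
    """Build a docker run command line for a locked-down, one-shot container.

    Illustrative sketch: only the mounted workspace is visible and writable.
    """
    return [
        "docker", "run", "--rm",
        "--network", "none",                     # no network interface at all
        "--read-only",                           # read-only root filesystem
        "--memory", f"{memory_mb}m",             # hard memory cap
        "--cpus", "0.5",                         # CPU throttle
        "--security-opt", "no-new-privileges",   # blocks setuid escalation
        "--cap-drop", "ALL",                     # drop every Linux capability
        "-v", f"{workspace}:/workspace:rw",      # only the workspace is mounted
        "-w", "/workspace",
        "alpine:3.19",                           # minimal base image (assumption)
        "sh", "-c", command,
    ]
```

The resulting argv can be handed to `subprocess.run(...)` with a host-side `timeout` to enforce the time limit; killing the process tears down the container thanks to `--rm`.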

Layer 2: Deny-by-default policy engine

The klawty-policy.yaml file is the security manifest. Everything is denied unless explicitly allowed:

network:
  default: deny
  allow:
    - "api.openrouter.ai"
    - "discord.com"
    - "api.telegram.org"
    - "gmail.googleapis.com"

filesystem:
  writable:
    - "workspace/"
    - "data/"
  readonly:
    - "skills/"
    - "config/"
  blocked:
    - "/etc"
    - "~/.ssh"
    - "~/.gnupg"
    - ".env"

exec:
  blocked:
    - "rm -rf"
    - "sudo"
    - "curl|bash"
    - "chmod 777"
    - "mkfs"
    - "dd if="

resources:
  max_memory_mb: 512
  max_cpu_percent: 50
  max_file_size_mb: 10

This isn't a suggestion — it's enforced at the runtime level. If an agent's LLM output includes a network call to an unlisted domain, it's blocked before the request leaves the machine. If a tool tries to write outside workspace/ or data/, the write fails with a policy violation.

Layer 3: Runtime safety

The policy engine handles infrastructure. The runtime handles business logic:

PII detection — Before any text reaches the LLM, a local scanner checks for email addresses, phone numbers, credit card numbers, and IBANs. Text containing PII is either routed to a local model (so it never leaves the machine) or redacted before being sent to the cloud.
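A local scanner like this can start as a set of regexes. This sketch covers only a subset of the patterns mentioned above, and the patterns are simplified; production detectors also validate checksums (e.g. Luhn for card numbers) to cut false positives:

```python
import re

# Simplified patterns for illustration — not Klawty's actual rules.
PII_PATTERNS = {
    "email": re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.-]+\b"),
    "credit_card": re.compile(r"\b(?:\d[ -]?){13,16}\b"),
    "iban": re.compile(r"\b[A-Z]{2}\d{2}[A-Z0-9]{11,30}\b"),
}

def scan_pii(text: str) -> list[str]:
    """Return the kinds of PII found in the text (empty list if clean)."""
    return [kind for kind, pat in PII_PATTERNS.items() if pat.search(text)]

def redact(text: str) -> str:
    """Replace every PII match with a placeholder before cloud dispatch."""
    for pat in PII_PATTERNS.values():
        text = pat.sub("[REDACTED]", text)
    return text
```

Running locally matters here: the scan itself must not call out to a cloud service, or the PII has already left the machine.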

Credential monitor — Every 6 hours, the system validates all API keys. Expired key? Alert. Low balance on a provider? Alert. Key revoked? Alert and graceful degradation.

SHA-256 runtime integrity — On every boot, Klawty computes SHA-256 hashes of all runtime modules and compares them to a manifest. If any file has been tampered with, the system refuses to start.
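The check itself is straightforward. A sketch of manifest verification, assuming a JSON manifest mapping relative paths to hex digests (the manifest format is an assumption, not Klawty's documented layout):

```python
import hashlib
import json
from pathlib import Path

def sha256_of(path: Path) -> str:
    """Stream the file through SHA-256 in chunks to bound memory use."""
    h = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(65536), b""):
            h.update(chunk)
    return h.hexdigest()

def verify_manifest(manifest_path: Path, root: Path) -> list[str]:
    """Return the relative paths whose current hash no longer matches."""
    manifest = json.loads(manifest_path.read_text())
    return [rel for rel, expected in manifest.items()
            if sha256_of(root / rel) != expected]
```

On boot, an empty return value means start normally; a non-empty one means refuse to start and report exactly which modules changed.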

5-tier autonomy model — Every tool has a risk classification:

| Tier | Behavior | Example |
|------|----------|---------|
| AUTO | Execute silently | Read a file, search the web |
| AUTO+ | Execute and notify | Update a tracker, draft an email |
| PROPOSE | Execute with 15-min rollback | Send an email, deploy to staging |
| CONFIRM | Wait for human approval | Production deploy, credential rotation |
| BLOCK | Always refused | Financial transfers, legal filings |
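One way to encode a tier table like this is a lookup that fails safe for unknown tools. The tool names and return values here are hypothetical, chosen to mirror the table:

```python
from enum import IntEnum

class Tier(IntEnum):
    AUTO = 0       # execute silently
    AUTO_PLUS = 1  # execute and notify
    PROPOSE = 2    # execute with a rollback window
    CONFIRM = 3    # wait for human approval
    BLOCK = 4      # always refused

TOOL_TIERS = {  # hypothetical tool names for illustration
    "read_file": Tier.AUTO,
    "draft_email": Tier.AUTO_PLUS,
    "send_email": Tier.PROPOSE,
    "deploy_production": Tier.CONFIRM,
    "transfer_funds": Tier.BLOCK,
}

def dispatch(tool: str) -> str:
    # Unknown tools default to CONFIRM — the safe side of the table.
    tier = TOOL_TIERS.get(tool, Tier.CONFIRM)
    if tier is Tier.BLOCK:
        return "refused"
    if tier is Tier.CONFIRM:
        return "queued_for_approval"
    if tier is Tier.PROPOSE:
        return "executed_with_rollback"
    return "executed" if tier is Tier.AUTO else "executed_and_notified"
```

The default-to-CONFIRM choice matters: a new tool added without a classification should get more human oversight, not less.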

Proposal system — PROPOSE and CONFIRM actions enter a 6-state lifecycle. A dedicated safety agent validates against 9 business rules before any execution. The human gets a Discord reaction or web portal button to approve, reject, or roll back within 15 minutes.
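Such a lifecycle can be modeled as an explicit transition table, so illegal moves (e.g. executing a proposal that was never approved) are impossible rather than merely discouraged. The state and event names below are hypothetical; the post doesn't enumerate Klawty's six states:

```python
# Hypothetical state machine — illustrates the pattern, not Klawty's exact states.
TRANSITIONS: dict[str, dict[str, str]] = {
    "pending":   {"validate": "validated", "reject": "rejected"},
    "validated": {"approve": "approved", "reject": "rejected"},
    "approved":  {"execute": "executed"},
    "executed":  {"rollback": "rolled_back"},  # within the rollback window
}

def step(state: str, event: str) -> str:
    """Advance the proposal, refusing any transition not in the table."""
    nxt = TRANSITIONS.get(state, {}).get(event)
    if nxt is None:
        raise ValueError(f"illegal transition: {state} --{event}-->")
    return nxt
```

The point of the table is that the safety agent's validation and the human's approval become mandatory gates in the data structure, not optional steps in a prompt.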

Why this matters

Every week, there's a new story about an AI agent doing something its creators didn't intend. The common response is "add guardrails" — usually meaning prompt instructions like "don't delete files."

Prompt-level guardrails are suggestions. Policy-level enforcement is architecture. The LLM doesn't decide what it's allowed to do. The security layer does.

Klawty ships with the full security stack in the free version. Deny-by-default policy engine, Docker sandbox, PII detection, runtime integrity, credential monitoring. Because security shouldn't be a premium feature.

curl -fsSL https://klawty.ai/install.sh | bash
klawty onboard