Secure runtime for AI agents
Sandboxes with PII redaction, prompt injection defense, and network controls built in. Everything your agent needs to run safely — one SDK, no duct tape.
Everything you get from a sandbox provider — plus a security pipeline you'd otherwise build yourself.
~50ms
sandbox startup
10,000+
sandboxes created
The problem
The AI agent stack is broken
Today you need a sandbox vendor, a guardrails vendor, DIY network controls, and DIY persistence. Four tools, gaps at every seam, nothing shares context.
Today
3-5 vendors · no shared context · gaps at every seam
With Declaw
One SDK · shared execution context · everything integrated
Sandbox + Security
Everything your agent needs to run safely
Sandboxes, filesystem, networking, guardrails, and audit — all sharing execution context inside the same Firecracker VM.
Firecracker Sandboxes
Every execution runs in an isolated microVM. ~50ms startup, configurable CPU/memory/disk, full Linux environment. Drop-in replacement for your current sandbox.
sbx = Sandbox.create(template="base", timeout=60)
result = sbx.commands.run("python3 agent.py")
print(result.stdout)
Persistent Filesystem
Read, write, watch files inside the sandbox. State persists across sessions so agents pick up where they left off.
sbx.files.write("/workspace/data.csv", csv_content)
files = sbx.files.list("/workspace")
content = sbx.files.read("/workspace/results.json")
Network-Layer Controls
L3/L4 kernel-level IP filtering + L7 domain/SNI inspection + TLS interception. Control exactly what your agent can reach.
from declaw import SecurityPolicy, NetworkPolicy
policy = SecurityPolicy(
    network=NetworkPolicy(
        allow_out=["*.openai.com", "pypi.org"],
        deny_out=["0.0.0.0/0"],
    )
)
Guardrails Suite
PII redaction with rehydration, prompt injection defense, code security analysis, toxicity scanning, and invisible text detection — all running at the proxy layer.
policy = SecurityPolicy(
    pii=PIIConfig(enabled=True),
    injection_defense=InjectionDefenseConfig(enabled=True),
    code_security=CodeSecurityConfig(enabled=True),
    toxicity=ToxicityConfig(enabled=True),
    invisible_text=InvisibleTextConfig(enabled=True),
)
Full Audit Trail
Every intercepted request, redaction event, and injection block is logged. Configurable retention, exportable.
policy = SecurityPolicy(
    audit=AuditConfig(
        enabled=True,
        log_request_body=True,
        retention_hours=24,
    )
)
# Audit logs accessible via console/API
Detection engine
What gets detected and blocked
PII is redacted at the boundary. Injections, toxic content, and invisible text are caught. Unsafe code is blocked before execution.
Social Security
Credit Card
Email Address
Phone Number
API Key
Prompt Injection
Toxic Content
Invisible Text
Code Security
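To make "redaction with rehydration" concrete, here is a minimal standalone sketch of the idea — sensitive values are swapped for placeholder tokens before text leaves the boundary, then restored in the response. This is an illustration only, not Declaw's implementation; the regex patterns and placeholder format are assumptions, and a real engine uses validated, context-aware detectors.

```python
import re

# Simplified detector patterns for illustration (a production engine
# would use validated, context-aware detectors, not bare regexes).
PATTERNS = {
    "SSN": re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
    "EMAIL": re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.]+\b"),
}

def redact(text: str) -> tuple[str, dict[str, str]]:
    """Replace matches with placeholder tokens; keep a map for rehydration."""
    mapping: dict[str, str] = {}
    for label, pattern in PATTERNS.items():
        def repl(m, label=label):
            token = f"<{label}_{len(mapping)}>"
            mapping[token] = m.group(0)
            return token
        text = pattern.sub(repl, text)
    return text, mapping

def rehydrate(text: str, mapping: dict[str, str]) -> str:
    """Restore the original values once the model response comes back."""
    for token, value in mapping.items():
        text = text.replace(token, value)
    return text

redacted, mapping = redact("Contact jane@example.com, SSN 123-45-6789")
# redacted == "Contact <EMAIL_1>, SSN <SSN_0>"
```

The token map never leaves the trusted side, so the upstream model only ever sees placeholders while the agent still receives a fully rehydrated response.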
Inside every sandbox
6-stage security pipeline
Every sandbox runs this pipeline on all outbound traffic. It's built into the VM — transparent to your agent, no code changes needed.
Network Policy
L3/L4 iptables rules — IP/CIDR allow/deny
Domain Filter
L7 SNI + Host header inspection, wildcards
TLS Interception
Per-sandbox CA, decrypt request/response body
Guardrails
PII redaction, prompt injection, code security, toxicity, invisible text — redact/block/log
Transformation Engine
Regex match/replace rules on body
Audit Logger
Metadata + events logged, configurable retention
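Putting the stages together, a single policy that enables the full pipeline might look like the sketch below. It is assembled only from the snippets shown above; the exact import path for the config classes is an assumption.

```python
# Assumed import path, by analogy with the NetworkPolicy example above.
from declaw import (
    SecurityPolicy, NetworkPolicy, PIIConfig, InjectionDefenseConfig,
    CodeSecurityConfig, ToxicityConfig, InvisibleTextConfig, AuditConfig,
)

policy = SecurityPolicy(
    # Stages 1-2: L3/L4 IP rules and L7 domain/SNI filtering
    network=NetworkPolicy(
        allow_out=["*.openai.com", "pypi.org"],
        deny_out=["0.0.0.0/0"],
    ),
    # Stage 4: guardrails running at the proxy layer
    pii=PIIConfig(enabled=True),
    injection_defense=InjectionDefenseConfig(enabled=True),
    code_security=CodeSecurityConfig(enabled=True),
    toxicity=ToxicityConfig(enabled=True),
    invisible_text=InvisibleTextConfig(enabled=True),
    # Stage 6: audit logging with configurable retention
    audit=AuditConfig(enabled=True, log_request_body=True, retention_hours=24),
)
```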
Early access
Join the waitlist
We're onboarding teams one at a time. Tell us what you're building and we'll get you set up.