Private beta · For security leaders

The armor layer for the agents your team is already using.

Your engineers have AI agents running on their machines right now — reading source, calling shell commands, touching production credentials. Hauberk sits between the model and the host, enforcing the boundary your policies already assume is there.

Priority access for CISOs and security engineering No sales call required
hauberk ▸ session #4127 ▸ engineer@acme.local
SANDBOXED claude-opus-4-7
refactor the checkout handler and run the tests
agent Reading src/checkout/handler.rs
EDIT apply patch · src/checkout/handler.rs · +38 −22 ✓ APPROVED
@@ -142,7 +142,12 @@ fn process(order: Order) -> Result<Receipt> { - let total = order.items.iter().map(|i| i.price).sum(); + let total = order.items.iter() + .map(|i| i.price.checked_mul(i.qty as i64)) + .collect::<Option<Vec<_>>>() + .and_then(|v| v.iter().copied().sum::<Option<_>>()) + .ok_or(Error::Overflow)?;
BASH cargo test --package checkout ↳ AWAITING APPROVAL
Sandbox: project root only · Network: denied · Timeout: 120s
WEB fetch https://prod-db.internal/dump ✕ BLOCKED BY POLICY
Reason: host prod-db.internal not in allow-list · Logged to audit stream evt_9f2a1c

Your engineers didn't wait for a policy.
They're already shipping with agents.

The productivity is real. So are the failure modes — and most of them never show up on a SIEM until after the fact. Three stand out.

01 / SHELL

Unreviewed commands run as the engineer.

Agents that touch the shell inherit whatever the developer can do — including production credentials, SSH keys, and internal endpoints. One hallucinated command is one incident.

02 / EGRESS

Prompt injection turns context into exfiltration.

A poisoned README, a dependency's docs, a ticket comment — any untrusted text the agent reads can become a new instruction. The model doesn't know the difference. By default, neither does your network.

03 / AUDIT

You can't attest to what you can't see.

SaaS coding agents don't hand you a clean trail of what the model did, what it touched, or what data left the laptop. For regulated environments, "trust us" isn't an answer you're allowed to give.

A local runtime with the boundaries
your policy already assumes.

Sandbox

Every tool call runs inside a confined process.

Landlock on Linux, sandbox-exec on macOS, Job Objects on Windows. Filesystem access restricted to explicit roots with symlink-escape detection. Network egress governed by allow-lists. If Hauberk can't enforce the boundary, it won't launch — the footer shows BLOCKED and the session refuses to start.

SANDBOXED full confinement
DEGRADED fallback boundary, logged
UNSANDBOXED explicit opt-in only
BLOCKED no boundary; refuses to launch
Approval

Destructive operations stop at a human.

Every bash, edit, write, and fetch call surfaces for review before it runs. Approve, modify, or deny — with the full argument set visible. Configurable per-project policies let you auto-approve low-risk reads and force review on everything else.

BASH rm -rf ./build [A]pprove · [M]odify · [D]eny
Secrets

Credentials never reach the model.

API keys live in a hardened in-process vault, redacted from anything sent to the provider. Environment scans strip common secret patterns before prompts are assembled. When the agent needs to act on a secret, it calls a named capability — not the value.

env → AWS_ACCESS_KEY_ID=«vault: aws/prod-rw»
Audit

A cryptographically journaled trail of every session.

Prompts, tool calls, approvals, denials, file diffs, and network attempts — streamed to your SIEM over syslog, HEC, or OTLP. Hash-chained entries that survive crashes, with replay support for incident review. Finally, a paper trail for what the agent did.

15:42:08 WEB denied · prod-db.internal · evt_9f2a1c → splunk · hec
Injection

Untrusted content stays untrusted.

Files, web fetches, and tool output pass through an instruction-detection filter before the model sees them. Suspicious content is tagged, quarantined, or reformatted so the agent treats it as data — not a command.

QUARANTINED README.md · 3 imperative directives detected

Hauberk sits between the model
and everything it touches.

It runs on the developer's machine. Nothing proxies through our infrastructure. The model provider is whoever you already approve.

Hauberk runtime architecture A horizontal diagram showing three boxes. On the left, the Provider — your AI model service. On the right, the developer machine with files, shell, and network. In the middle, a larger box labelled "Hauberk · enforced boundary" wraps the connection between them and contains the agent session along with four control pills: sandbox, vault, audit, and approval. Tool calls cross the Hauberk boundary in both directions; access to the developer machine is enforced through that boundary. PROVIDER Claude · GPT · Gemini your existing contract HAUBERK · ENFORCED BOUNDARY AGENT SESSION sandbox vault audit approval DEVELOPER MACHINE Files · shell · network confined to policy TOOL CALLS ENFORCED
Provider calls cross into the Hauberk boundary. The agent session, and every reach from it to files, shell, or network on the developer machine, is gated by sandbox · vault · audit · approval.

Questions your legal and
security teams will ask.

Short answers. If your team needs the long version — threat model, architecture doc, license draft — ask for the evaluation package when you request access.

Is Hauberk open source?

Source-available under a time-delayed open-source license. The exact terms are being finalized with counsel, but the model we're targeting is standard in this space — you get full source to read, audit, fork for internal use, and run in production; the only restriction is that you can't resell Hauberk itself or offer it as a hosted service. After a defined period (measured in years, not months), each release converts automatically to a permissive OSI license.

If your legal team has prior familiarity with FSL (Sentry's Functional Source License) or BUSL (Business Source License, used by HashiCorp and MariaDB), you're in the right neighborhood.

Does any of my code, prompts, or data touch Hauberk's infrastructure?

No. Hauberk is a binary that runs on the developer's machine. Prompts go directly from that machine to whichever model provider you've approved — Anthropic, OpenAI, Google, or a private deployment. We don't proxy, relay, or observe the session. There is no "Hauberk cloud."

Telemetry is opt-in and limited to anonymous version-check pings. Disable it in config and the binary makes zero outbound calls of its own.

How is this different from Claude Code, Cursor, or Cody?

Those are agents. Hauberk is the boundary the agent runs inside. We're not trying to replace Claude Code — we're the thing that lets your security team sign off on developers using it.

An agent decides what to do. Hauberk decides what the agent is allowed to do: which files it can touch, which commands need approval, where egress can go, what gets logged to your SIEM, and what happens when it tries to step outside the lines.

What's the dependency license story? Any GPL surface?

No GPL or LGPL dependencies. The workspace is Rust, and we explicitly avoid the LGPL traps that trip up enterprise legal review — no bubblewrap, no libseccomp-rs. Sandboxing goes through kernel interfaces directly: Landlock on Linux, sandbox-exec on macOS, Job Objects on Windows.

cargo deny runs in CI with an allow-list policy; the full dependency SBOM is part of the evaluation package.

How does the audit trail work? Can we pipe it to our SIEM?

Every session emits a hash-chained event stream: prompts, tool calls, approvals, denials, file diffs, network attempts. Transports out of the box are syslog, Splunk HEC, and OTLP; a file sink is available if you want to ingest with your own agent.

Hash chaining means an attacker who tampers with historical entries invalidates the chain. The journal survives crashes — partial sessions are recoverable and clearly marked as such.

What happens when the sandbox can't enforce the boundary?

Hauberk fails closed. If the runtime can't establish the expected confinement — say, Landlock isn't available on an old kernel — the footer surfaces one of three states before anything runs:

SANDBOXED is the normal state: full confinement in effect. DEGRADED means a fallback boundary is active (e.g., seccomp-only on Linux) and is logged as a policy event. UNSANDBOXED requires explicit config to enable and is loud by design. BLOCKED means no acceptable boundary is available and Hauberk refuses to launch.

Your security policy controls which of those are acceptable in which contexts.

Who's behind this?

Small team, early-stage, building out of Nevada. We'll introduce ourselves properly when you request access — happy to take a call with your security leadership before any commitments. The founder's background and the technical lead contact both live in the evaluation package.

Pricing?

Per-seat, annual, with volume tiers for larger deployments. Free evaluation access during private beta. Detailed pricing is available once we've had an intro call — not because we're being coy, but because the right number depends on how many engineers, which features (audit sinks, SSO, managed policy), and whether you want on-call support.

Deployment
Self-hosted binary
zero cloud dependency
Data path
Prompts go to your model provider
Hauberk never sees them
Language
Rust · memory-safe
no runtime, no GC, no ffi surprises
Source
Source-available for full audit
no hidden agent in the loop

Stop assuming the boundary
is already there.

Private beta is rolling out to security-engineering teams now. Leave an email and we'll get in touch with evaluation access and the threat model.