For the past few weeks, we have been running OpenClaw in our infrastructure. It is not just a writing assistant; it actively works across our systems: committing code, managing repositories, interacting with APIs. The effort required to run self-hosted software sovereignly has dropped dramatically: despite full schedules, we can offer a broad software catalogue and help others achieve the same efficiency gains. But that kind of autonomy comes with a tradeoff.
Modern agents no longer just follow instructions or automate individual steps. They figure out how to reach a goal, take action, and validate their own results. Useful, but also a risk: an agent running with your user permissions can do anything you can do.
So the question we kept coming back to was: How do we ensure nothing goes wrong despite high autonomy? How do we keep humans in control?
Isolation Is a Good Start, but Not Enough
The standard answer is isolation: VMs, containers, network segmentation, tight permissions. This significantly limits the blast radius and is solid advice we follow regardless.
But in practice, agents need credentials. They need to authenticate against Git repositories, APIs, external services. The usual approach is an access token in an environment variable or config file.
The problem: OpenClaw runs with the permissions of the host user. Anything readable by the user is readable by the agent. And anything readable by the agent can end up somewhere else, not maliciously, but through prompt injection, a compromised model, or simply because the token appears in context and gets processed. Whoever holds a token holds the keys to every service it was issued for.
Isolation limits what can happen, but a residual risk remains: as long as the token is within the agent’s reach, it can be exposed. Isolation does not stop the agent from seeing the token.
The Fix: Keep the Token Out of Reach
We approached this structurally. Instead of hoping the agent handles credentials responsibly, we made sure it never encounters them. The agent does not communicate with the Git repository directly. It goes through a local proxy running as a sidecar alongside OpenClaw.
From the agent’s point of view, the repository is just a local address. The real hostname and authentication details are never exposed. The proxy injects the auth header into every request automatically.
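Concretely, the agent’s Git remote can simply point at the sidecar. A minimal sketch of what the repository looks like from inside the agent’s environment (the repository path is a placeholder; the port matches the proxy config below):

```ini
# .git/config inside the agent's workspace — the remote is just a local address
[remote "origin"]
	url = http://localhost:9999/org/repo.git
	fetch = +refs/heads/*:refs/remotes/origin/*
```

Every fetch and push then passes through the proxy, which rewrites the Host header and attaches the credentials before forwarding to the real remote.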
The token never travels through OpenClaw. It cannot be leaked by a compromised session or extracted via prompt injection. It is simply not there to be found.
How It Works
We run OpenClaw in a Docker Compose setup, which allows us to run Caddy as a sidecar directly alongside it. OpenBao generates the Caddy config at runtime and writes it to a temporary shared volume, accessible only to OpenBao and Caddy, not to OpenClaw. The token stays in the secret store and only appears as an injected HTTP header inside Caddy.
template {
  contents = <<EOT
{
    admin off
    auto_https off
}

http://localhost:9999 {
    reverse_proxy https://your.git.remote.server {
        # the base64-encoded credentials are rendered into this header
        # by the OpenBao template; the secret lookup is omitted here
        header_up Authorization "Basic "
        header_up Host your.git.remote.server
    }
}
EOT
  destination = "/proxy/Caddyfile"
  perms       = "0600"
  command     = "caddy reload --config /proxy/Caddyfile 2>/dev/null || true"
}
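In Compose terms, the wiring might look roughly like this. Service names, image tags, and the agent command are assumptions for illustration, not our exact setup; the important part is which containers can see the shared volume:

```yaml
services:
  openclaw:
    image: openclaw:latest          # hypothetical image name
    # note: no mount of proxy-config — the agent never sees the rendered Caddyfile

  openbao-agent:
    image: openbao/openbao:latest   # assumed image; renders the template above
    command: ["bao", "agent", "-config=/etc/bao/agent.hcl"]
    volumes:
      - proxy-config:/proxy         # writes the rendered Caddyfile

  caddy:
    image: caddy:2
    command: ["caddy", "run", "--config", "/proxy/Caddyfile"]
    volumes:
      - proxy-config:/proxy:ro      # reads the Caddyfile; the token exists only here

volumes:
  proxy-config:
```

The shared volume is mounted read-only into Caddy and not mounted into the OpenClaw container at all, which is what keeps the rendered config, and the token inside it, out of the agent’s reach.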
Architecture Over Policy
The key distinction here is not about trust. We are not asking OpenClaw to behave well with credentials it has access to. We designed the system so it never has access in the first place. That is the difference between policy and architecture.