<?xml version="1.0" encoding="utf-8"?><feed xmlns="http://www.w3.org/2005/Atom" ><generator uri="https://jekyllrb.com/" version="4.4.1">Jekyll</generator><link href="https://kuntschik.online/en/feed.xml" rel="self" type="application/atom+xml" /><link href="https://kuntschik.online/en/" rel="alternate" type="text/html" /><updated>2026-04-13T11:31:09+00:00</updated><id>https://kuntschik.online/feed.xml</id><title type="html">Philipp Kuntschik</title><entry xml:lang="en"><title type="html">Anthropic Claude Mythos: What Is Changing in Cybersecurity</title><link href="https://kuntschik.online/en/anthropic-mythos-cybersecurity" rel="alternate" type="text/html" title="Anthropic Claude Mythos: What Is Changing in Cybersecurity" /><published>2026-04-10T00:00:00+00:00</published><updated>2026-04-10T00:00:00+00:00</updated><id>https://kuntschik.online/anthropic-mythos-cybersecurity</id><content type="html" xml:base="https://kuntschik.online/anthropic-mythos-cybersecurity"><![CDATA[<p>Last week, a story made the rounds: Anthropic unveiled Claude Mythos, an AI model that can reportedly find and exploit security vulnerabilities in major operating systems and browsers on its own.
It is said to significantly outperform every publicly available model.
According to Bloomberg, the heads of systemically important US banks were summoned to an urgent meeting to discuss the potential implications.
Germany’s Federal Office for Information Security (BSI) expects, according to dpa, a “fundamental shift in how vulnerabilities are handled and in the vulnerability landscape as a whole”.</p>

<p>So far, Anthropic has made the model available only to a small circle of technology companies. Independent assessments do not exist.
All claims come from Anthropic itself, and the media buzz carries unmistakable hype.
Whether Mythos is truly as powerful as claimed remains to be seen.</p>

<p>What holds true regardless of Mythos: AI systems are getting more capable by the day.
Mythos is not an outlier. It is another signal.
Behind it lies a structural shift that has been underway for some time.</p>

<h2 id="the-structural-shift">The Structural Shift</h2>

<p>The balance of power between offense and defense in cybersecurity has been shifting for years.
Attacks have become cheaper and automation has lowered the required expertise, while defense still scales only linearly with budget.
OWASP calls this imbalance asymmetric warfare.</p>

<p>Offensive AI brings together three things that did not previously coexist: it adapts, it is efficient, and it scales almost without limit.
A human attacker can probe a system in a handful of ways at once.
An AI-powered system can approach the same target from a thousand angles simultaneously, at the same quality, without fatigue, without a learning curve.</p>

<p>The attack surface is not limited to AI systems themselves.
The entire existing digital infrastructure is in play: web applications, APIs, internal systems, supply chains.
The pace is accelerating: the window between a vulnerability being discovered and being exploited is shrinking toward zero.
What used to take weeks will soon happen in hours.</p>

<h2 id="good-architecture-helps-but-is-not-enough">Good Architecture Helps, but Is Not Enough</h2>

<p>Isolation, network segmentation, zero trust: for organizations, these concepts are nearly indispensable today.
They compartmentalize the attack surface, slow down lateral movement, and make attacks more expensive. Strongly recommended.</p>

<p>But ultimately, this alone does not protect: targets that demand more effort also tend to promise a larger reward, so determined attackers will invest it.
Anyone who believes they can rest on their architecture without constantly rethinking it underestimates how adaptive modern attackers have become.</p>

<h2 id="why-this-matters-now">Why This Matters Now</h2>

<p>What does this mean in practice for those responsible inside organizations?</p>

<p>Most of an enterprise IT landscape is not built in-house. It runs on third-party products.
Every external piece of software, every library, every SaaS integration extends the attack surface.
Application teams and product managers carry direct responsibility: ensuring that all vulnerabilities in deployed products are known and resolved before they are exploited is becoming a baseline requirement.
The same applies to the development process itself: code reviews, static analysis, and dependency scanning are no longer optional.</p>

<p>As the attack surface grows, so does the workload of those who monitor it.
For security teams, the focus in vulnerability management shifts fundamentally.
Systems need to be understood holistically; application owners should have a playbook for immediate compensating measures.
Vulnerabilities that individually carry low severity can be devastating in combination, and AI makes that combination far more likely.
The question is no longer which gap to close first, but how fast it can be closed.
Automated detection provides the critical head start.
Those who spot anomalies early can respond before an attack escalates.
Those who do not will find themselves in a race they structurally cannot win.</p>
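<p>A toy calculation makes the combination point concrete; the probabilities are invented for illustration, not measurements:</p>

```python
# Why many individually minor weaknesses become dangerous when an
# attacker can cheaply try them all: the chance that at least one
# exploit chain works grows quickly with the number of attempts.
# All numbers here are made up for illustration.

def p_any_chain_works(p_single: float, attempts: int) -> float:
    """Probability that at least one of `attempts` independent
    exploit chains succeeds if each succeeds with `p_single`."""
    return 1 - (1 - p_single) ** attempts

# A human attacker might try a handful of chains...
print(round(p_any_chain_works(0.02, 5), 2))     # modest odds
# ...an automated system can try a thousand in the same time.
print(round(p_any_chain_works(0.02, 1000), 2))  # near certainty
```

<p>The absolute numbers mean nothing; the shape of the curve is the point.</p>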

<p>Operational measures alone are not enough when the underlying risk model no longer holds.
CISOs and risk managers need to rethink their classical approach.
CVSS-based prioritization breaks down when the exploit probability for every known vulnerability approaches 1.
The range of possible incidents grows wider, estimates become less reliable, and residual risks that were previously deemed acceptable no longer are.
This needs to be communicated before it has to be explained in the middle of a crisis.</p>
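<p>A small sketch of why the ranking collapses; the CVE names, scores, and probabilities are invented:</p>

```python
# Classic risk-based prioritization weights severity by exploit
# likelihood. When automated exploitation pushes every likelihood
# toward 1, the ranking degenerates into raw severity and stops
# telling you what can safely wait. All values here are invented.

VULNS = {"CVE-A": 9.8, "CVE-B": 5.3, "CVE-C": 7.5}  # CVSS base scores

def ranked(exploit_prob):
    """Order vulnerabilities by severity times exploit probability."""
    risk = {cve: score * exploit_prob[cve] for cve, score in VULNS.items()}
    return sorted(risk, key=risk.get, reverse=True)

# Today: the actively exploited medium outranks the unexploited critical.
print(ranked({"CVE-A": 0.05, "CVE-B": 0.9, "CVE-C": 0.3}))
# Tomorrow: every probability is close to 1, so only raw severity
# remains, and with it any basis for deferring a fix.
print(ranked({"CVE-A": 1.0, "CVE-B": 1.0, "CVE-C": 1.0}))
```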

<h2 id="act-before-others-do">Act Before Others Do</h2>

<p>Those who want to hold their own against AI-powered attackers need comparable tools on the defense side, and the competence to use them.
AI-driven attack simulation and automated anomaly detection are not a luxury.</p>

<p>Offensive AI systems are not a future scenario. They are here.
The only open question is who acts first: the attackers or us.</p>]]></content><author><name>Philipp Kuntschik</name></author><category term="tech" /><category term="security" /><summary type="html"><![CDATA[Anthropic's new AI model can autonomously find and exploit security vulnerabilities. What this means for enterprises and the industry.]]></summary><media:thumbnail xmlns:media="http://search.yahoo.com/mrss/" url="https://kuntschik.online/assets/favicon.svg" /><media:content medium="image" url="https://kuntschik.online/assets/favicon.svg" xmlns:media="http://search.yahoo.com/mrss/" /></entry><entry xml:lang="en"><title type="html">No Token, No Leak: Limiting OpenClaw’s Visibility</title><link href="https://kuntschik.online/en/access-token-injection" rel="alternate" type="text/html" title="No Token, No Leak: Limiting OpenClaw’s Visibility" /><published>2026-04-06T00:00:00+00:00</published><updated>2026-04-06T00:00:00+00:00</updated><id>https://kuntschik.online/access-token-injection</id><content type="html" xml:base="https://kuntschik.online/access-token-injection"><![CDATA[<p>For the past few weeks, we have been running OpenClaw in our infrastructure.
It is not just a writing assistant; it actively works across our systems: committing code, managing repositories, interacting with APIs.
The effort required to run self-hosted software on our own, sovereign infrastructure has dropped dramatically.
Even with full schedules, we can offer a broad software catalogue and help others achieve the same efficiency gains.
That kind of autonomy comes with a tradeoff.</p>

<p>Modern agents no longer just follow instructions or automate individual steps.
They figure out how to reach a goal, take action, and validate their own results.
Useful, but also a risk: an agent running with your user permissions can do anything you can do.</p>

<p>So the question we kept coming back to was:
How do we ensure nothing goes wrong despite high autonomy?
How do we keep humans in control?</p>

<h2 id="isolation-is-a-good-start-but-not-enough">Isolation Is a Good Start, but Not Enough</h2>

<p>The standard answer is isolation: VMs, containers, network segmentation, tight permissions.
This significantly limits the blast radius and is solid advice we follow regardless.</p>

<p>But in practice, agents need credentials.
They need to authenticate against Git repositories, APIs, external services.
The usual approach is an access token in an environment variable or config file.</p>

<p>The problem: OpenClaw runs with the permissions of the host user.
Anything readable by the user is readable by the agent.
And anything readable by the agent can end up somewhere else, not maliciously, but through prompt injection, a compromised model, or simply because the token appears in context and gets processed.
Whoever holds a token holds the keys to every service it was issued for.</p>

<p>Isolation reduces the blast radius, but a residual risk remains.
As long as the token is within the agent’s reach, it can be exposed.
Isolation limits what can happen. It does not stop the agent from seeing the token.</p>

<h2 id="the-fix-keep-the-token-out-of-reach">The Fix: Keep the Token Out of Reach</h2>

<p>We approached this structurally.
Instead of hoping the agent handles credentials responsibly, we made sure it never encounters them.
The agent does not communicate with the Git repository directly. It goes through a local proxy running as a sidecar alongside OpenClaw.</p>

<p><img src="/assets/token-proxy-en.svg" alt="Token-Proxy Architecture" /></p>

<p>From the agent’s point of view, the repository is just a local address.
The real hostname and authentication details are never exposed.
The proxy injects the auth header into every request automatically.</p>
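<p>Concretely, the agent's Git remote is just the sidecar's local address. A minimal sketch; the repository path is a placeholder, and port 9999 matches the proxy configuration shown further down:</p>

```shell
# A scratch repository standing in for the agent's checkout.
workdir="$(mktemp -d)"
git init --quiet "$workdir"

# The remote is only the local proxy: no real hostname, no credentials.
git -C "$workdir" remote add origin http://localhost:9999/org/repo.git
git -C "$workdir" remote get-url origin   # prints the local proxy address
```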

<p>The token never travels through OpenClaw.
It cannot be leaked by a compromised session or extracted via prompt injection. It is simply not there to be found.</p>

<h2 id="how-it-works">How It Works</h2>

<p>We run OpenClaw in a Docker Compose setup, which allows us to run Caddy as a sidecar directly alongside it.
OpenBao generates the Caddy config at runtime and writes it to a temporary shared volume, accessible only to OpenBao and Caddy, not to OpenClaw.
The token stays in the secret store and only appears as an injected HTTP header inside Caddy.</p>

<div class="language-hcl highlighter-rouge"><div class="highlight"><pre class="highlight"><code>template {
  contents = &lt;&lt;EOT
{
  admin off
  auto_https off
}

http://localhost:9999 {
  reverse_proxy https://your.git.remote.server {
    # the OpenBao template expression that renders the token is omitted here
    header_up Authorization "Basic "
    header_up Host your.git.remote.server
  }
}
EOT
  destination = "/proxy/Caddyfile"
  perms       = "0600"
  command     = "caddy reload --config /proxy/Caddyfile 2&gt;/dev/null || true"
}
</code></pre></div></div>
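<p>The surrounding Compose wiring might look roughly like this. A sketch only: service names, images, and paths are illustrative, not our exact setup:</p>

```yaml
services:
  openclaw:
    image: openclaw:latest            # placeholder image name
    network_mode: "service:caddy"     # shared netns: the proxy is localhost:9999
    # deliberately no volume with secrets or the rendered Caddyfile

  caddy:
    image: caddy:2
    depends_on: [openbao]             # the config must be rendered first
    command: ["caddy", "run", "--config", "/proxy/Caddyfile", "--watch"]
    volumes:
      - proxy-config:/proxy:ro        # reads the rendered config

  openbao:
    image: openbao/openbao:latest     # runs the template block from above
    volumes:
      - proxy-config:/proxy           # writes the rendered Caddyfile

volumes:
  proxy-config: {}                    # shared by openbao and caddy only
```

<p>With this split, a compromised OpenClaw session can reach the proxy but never the volume holding the rendered config.</p>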

<h2 id="architecture-over-policy">Architecture Over Policy</h2>

<p>The key distinction here is not about trust.
We are not asking OpenClaw to behave well with credentials it has access to.
We designed the system so it never has access in the first place.
That is the difference between policy and architecture.</p>]]></content><author><name>Philipp Kuntschik</name></author><category term="tech" /><category term="security" /><category term="infrastructure" /><summary type="html"><![CDATA[How we systematically restrict autonomous AI agents from accessing secrets and tokens in our infrastructure.]]></summary><media:thumbnail xmlns:media="http://search.yahoo.com/mrss/" url="https://kuntschik.online/assets/favicon.svg" /><media:content medium="image" url="https://kuntschik.online/assets/favicon.svg" xmlns:media="http://search.yahoo.com/mrss/" /></entry></feed>