If you read my last article, you know I've been running OpenClaw, the open-source autonomous AI agent, as a personal AI experiment. As an AI and cybersecurity professional and the author of a book on securing emerging technologies, I suppose that should surprise no one. My OpenClaw accesses files, runs on a schedule, briefs me in the morning, and remembers who I am, based on what I tell it. But being ultra-paranoid about security, I decided to research and build what I can only describe as a full digital airlock.

First, some context on why this matters right now

OpenClaw went from niche developer experiment to global conversation piece almost overnight. This week at Nvidia's GTC 2026 conference, the annual event that has become something of a Super Bowl for the AI industry, Jensen Huang stood on stage and declared OpenClaw the "operating system for personal AI," comparing its importance to that of the Mac and Windows operating systems.

He didn't stop there. "Every company in the world today needs to have an OpenClaw strategy, an agentic system strategy. This is the new computer," Huang said. "This is as big of a deal as HTML, as big of a deal as Linux."

Nvidia also launched NemoClaw, an enterprise-grade wrapper built on OpenClaw. Huang worked directly with OpenClaw creator Peter Steinberger to build it and shouted him out during the keynote.

When the world's most valuable technology company builds its flagship enterprise AI product on top of an open-source tool, and its CEO calls it the most significant software development in a generation, I pay attention.

While I was keenly following the hype, I was also painfully aware of the security risks and some of the unfortunate stories of early victims.

The fear was legitimate

Most articles I found about OpenClaw eventually arrived at the same place: this thing is powerful, and if you're not careful, it's dangerous.

An AI agent that can read your emails, execute terminal commands, manage your calendar, and message people on your behalf isn't just a productivity tool. It's a persistent process with elevated access to your digital life. And unlike a chatbot, it doesn't wait to be asked. It acts.

The attack surface is real. Prompt injection, where malicious instructions get embedded inside a document, email, or web page the agent processes, can cause it to execute actions you never intended. The ClawHub skill marketplace has had serious issues with malicious third-party skills. And most people deploying OpenClaw out of the box had done little to nothing to harden it.

I wasn't willing to accept that. So before I ran the agent, I built the environment.

The airlock: what I actually set up

I want to be specific here, because vague advice about "being careful with AI" is not useful.

I started with dedicated hardware. A separate machine with 24GB RAM and 1TB of storage, purchased specifically for this. Not my work laptop. Not my personal machine. A clean device with no prior history, no existing accounts, no shared credentials.

Then a separate network. The OpenClaw machine sits on its own isolated network, physically separated from the environment I use for work and daily life. If something goes wrong (a prompt injection, a rogue skill, an unintended command), the blast radius is contained.

New identity, end to end. A unique ID, dedicated email account, separate subscriptions to Claude and other AI services. Nothing shared with my primary accounts. The agent has no pathway to my real digital identity unless I deliberately give it one.

Then I encrypted the machine and enabled the firewall before the agent ever touched it. Full disk encryption. Default-deny network rules. The security baseline was set before OpenClaw was installed, not after.
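To make "default-deny" concrete: on a Linux box with ufw, the baseline looks roughly like the sketch below. This is illustrative, not my exact ruleset; the subnet and the allowed outbound ports are assumptions you'd tailor to what the agent genuinely needs.

```shell
# Default-deny baseline (sketch, assumes Ubuntu with ufw installed).
# Run this before the agent is ever installed, not after.
sudo ufw default deny incoming
sudo ufw default deny outgoing

# Then allow only what the agent actually needs, e.g. DNS and HTTPS out:
sudo ufw allow out 53
sudo ufw allow out 443/tcp

# If you administer the box over SSH from the isolated network only
# (192.168.50.0/24 is a placeholder for your own subnet):
sudo ufw allow in from 192.168.50.0/24 to any port 22 proto tcp

sudo ufw enable
sudo ufw status verbose   # verify the rules before going any further
```

The order matters: set the denies first, then open pinholes, so a forgotten rule fails closed rather than open.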

Then I let the agent do a security audit

One of the first things I did after getting OpenClaw running was ask it to audit its own setup for vulnerabilities.

I wasn't expecting much. I got a detailed report.

There were issues. Configuration gaps I hadn't caught. Default settings that were more permissive than they should have been. Logging that wasn't capturing what I thought it was. A couple of permission settings that, in a less isolated environment, would have been genuinely risky.

I fixed them. All of them. And then I scheduled the audit to run weekly, automatically, every Sunday morning.
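Scheduling that weekly run is plain cron. The script name and paths below are placeholders for whatever wraps your "audit your own setup" prompt; OpenClaw has its own scheduling features, and an OS-level cron job is simply a belt-and-suspenders way to make sure the review happens even if the agent's scheduler doesn't.

```shell
# Hypothetical crontab entry: run the self-audit every Sunday at 07:00
# and keep a dated log. run-agent-audit.sh is a placeholder name.
0 7 * * 0  /home/agent/bin/run-agent-audit.sh >> /home/agent/logs/audit-$(date +\%F).log 2>&1
```

Note the escaped percent sign: unescaped `%` has special meaning inside a crontab line.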

I also set up a CISO agent that runs daily scans and security checks, performs weekly research into new vulnerabilities, runs backup jobs, and helps me develop additional bespoke security tools and skills.

That last part matters. Security isn't a one-time configuration. OpenClaw evolves. Skills get updated. New vulnerabilities emerge. The threat surface shifts. A weekly automated review means I'm not relying on memory or discipline to stay ahead of it; the agent does it for me.

The controls I consider non-negotiable

After going through this process, I landed on a set of hard rules for how my agent operates. These aren't suggestions. They're constraints baked into the configuration:

The agent never sends a message on my behalf without explicit approval. Ever. Not a WhatsApp, not a Slack, not an email. It drafts. I send.

It never deletes a file without asking.

It restricts filesystem access to its own workspace. No access to secret keys, system files, or anything outside its designated lane.

Shell commands require explicit permission and are logged every time. Anything that can touch the operating system gets treated as a high-risk action. Files are saved before being changed.

After three failed attempts on any task, it stops and reports back rather than retrying indefinitely. This one is underappreciated: a runaway agent attempting the same action in a loop is a real failure mode.

Identity files, the documents that define who I am and what the agent knows about me, are never readable externally. If anything asks the agent to reveal them, it refuses and alerts me.
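OpenClaw's own configuration surface is where these rules actually live, so treat the following purely as an illustration of the three-strikes rule: a minimal POSIX shell wrapper (all names are my own invention) that gives a task three attempts and then stops and reports instead of looping forever.

```shell
# Minimal sketch of the three-strikes rule: run a command at most three
# times; after the third failure, stop and report instead of looping.
run_with_limit() {
    max=3
    n=1
    while [ "$n" -le "$max" ]; do
        if "$@"; then
            echo "succeeded on attempt $n"
            return 0
        fi
        n=$((n + 1))
    done
    echo "stopped after $max failed attempts; reporting back"
    return 1
}

run_with_limit true                           # prints "succeeded on attempt 1"
run_with_limit false || echo "human notified" # fails 3 times, then reports
```

The important property is the hard ceiling: the failure path terminates and surfaces to a human, rather than retrying on a backoff that never ends.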

What I think about the risk now

Having lived inside this setup for a bit, my view is this: the fear-mongering isn't wrong; it's just incomplete.

Yes, OpenClaw is inherently powerful in ways that create real risk. The people warning you about that aren't exaggerating. But the risk isn't a reason to avoid it, it's a reason to approach it seriously.

The gap between a reckless OpenClaw deployment and a hardened one isn't months of work. It's a week of deliberate setup and a commitment to ongoing hygiene. The airlock I described above took real effort and cost, but it wasn't beyond the reach of anyone who takes their digital security seriously.

If you decide to use OpenClaw, go in with your eyes open. Understand what you're deploying. Respect what it can do. And then build the environment that deserves that power.

The agentic AI era isn't coming. According to Jensen Huang, it arrived this week. The question isn't whether you'll engage with it; it's whether you'll do so securely.

How are you thinking about the security architecture around AI agents? I'd like to hear from leaders who are already navigating this, or actively deciding not to.