🌟 Vasilij’s Note
This week I went deep on OpenClaw—the fastest-growing AI agent on GitHub that also happens to be one of the most dangerous tools you can put near client data right now. A CVSS 8.8 remote code execution vulnerability. A skills marketplace where 12% of listings are malicious. Twenty-one thousand instances are sitting exposed on the open internet with no authentication. Belgium's national cybersecurity centre issued a formal warning. And yet the install count keeps climbing. This isn't a fringe problem. If you run a consultancy with developers or technically curious staff, assume someone has already tried it. The broader pattern I keep seeing: the tools that generate the most excitement are often the ones with the least governance. The speed of adoption is outpacing the security review—and that gap is where client data gets compromised.

In Today's Edition:

This Week in Agents | What Changed

  • Anthropic raises $30 billion Series G at a $380 billion valuation – Run-rate revenue is now $14 billion, growing 10x annually. Eight of the Fortune 10 are Claude customers. → For consultancies evaluating AI vendors, Anthropic's financial runway now makes it a credible long-term partner. Anthropic

  • Anthropic accuses three Chinese AI labs of systematic model extraction – DeepSeek, Moonshot AI, and MiniMax used 24,000 fraudulent accounts to generate 16 million interactions with Claude. → IP protection is now an operational concern, not just a geopolitical one. Ask your AI vendors how they detect and respond to extraction attacks. TechCrunch

  • OpenAI partners with BCG, McKinsey, Accenture and Capgemini on Frontier – Multi-year "Frontier Alliances" with embedded OpenAI engineers inside client delivery teams. → Mid-market consultancies face a narrowing window to build internal AI capability before the gap becomes a commercial disadvantage. OpenAi

Top Moves - Signal → Impact

  • 1.5 million AI agents operating without governance A Gravitee study of 750 IT executives found over three million AI agents now operating inside US and UK organisations—with 47% actively unmonitored. The mean number of agents deployed per business is 36.9. One in two is ungoverned. 88% of firms have already experienced or suspected an AI agent-related security or privacy incident in the past year.

    → Shadow AI is not a future risk. It is a current operational reality. The most common entry point is individual developers installing tools like OpenClaw without IT approval. Run a network scan for exposed ports (18789, 18793 for OpenClaw specifically) and check for agent directories on BYOD devices. If you find them, you have a policy gap, not just a security gap.

  • Anthropic Claude Code now scans codebases for security vulnerabilities Claude Code's latest capability identifies weaknesses in production codebases and suggests targeted patches for human review. The announcement contributed to a software stock selloff, with Thomson Reuters dropping 16% and LegalZoom falling 20%.

    → AI is beginning to replace the audit layer, not just the production layer. For consultancies doing any code-adjacent delivery—technical due diligence, system reviews, DevOps work—this changes the cost structure of what you can offer clients. The firms that integrate this first will reprice their services; those that don't will be repriced by competition.

  • Cloud Security Alliance flags governance gaps in autonomous agents A new CSA report finds that enterprises deploying AI agents are relying on static credentials, inconsistent access controls, and undefined accountability structures. Runtime guardrails are rare. Audit logging is inconsistent. Only 18% of security leaders are highly confident their IAM systems can manage agent identities effectively.

    → If you're deploying any agents in client environments, your governance documentation needs to include: credential rotation policies, audit log configuration, and a defined accountability chain for every action the agent can take. This is no longer optional in regulated industries.

Upskilling Spotlight | Learn This Week

Cloud Security Alliance: Securing Autonomous AI Agents Report

Outcome: Practical framework for identity management, access controls, and audit requirements for production agent deployments. Written for practitioners, not theorists. Essential reading before deploying any agent in a client environment. CSA

Anthropic: Detecting and Preventing Distillation Attacks

Outcome: Understand how large-scale model extraction attacks work, what the tell-tale patterns look like in API traffic, and what vendor-level protections you should be asking about when evaluating AI platforms for client deployments. Anthropic

Maker Note | What I built this week

This week, I researched online and documented seven security failures in OpenClaw for a full breakdown video.

Decision: Don't touch it for client work. The vulnerability surface isn't a list of bugs waiting to be patched — it's architectural. No authentication on the gateway. No review process on the marketplace. Credentials stored in plaintext. Prompt injection with no systemic defence. Each issue alone would give me pause. All ten together make it a liability, not a tool. If your developers want the capability, there are enterprise-grade alternatives that don't require you to audit 341 malicious marketplace extensions before you get started.

This is OpenClaw. Here are 10 security failures that should stop any consultancy from putting it anywhere near client data.

Operator’s Picks | Tools To Try

Shodan

  • Use for scanning your network perimeter for exposed agent ports before your security team—or an attacker—does it for you.

  • Standout: searches for default OpenClaw ports (18789, 18793) take under two minutes.

  • Caveat: requires an account for detailed results.

Gravitee AI Agent Management

  • Use for centralising visibility and governance across AI agent API calls.

  • Standout: designed for agent-to-agent traffic, not just human-to-API interactions.

  • Caveat: overkill for firms under 50 staff; start with a simple agent registry first.

ModelOp

  • Use for AI governance portfolio management—tracking which agents are deployed, what data they access, and whether they're meeting compliance requirements.

  • Caveat: enterprise-focused pricing. Evaluate against simpler alternatives for firms under 100 staff.

Deep Dive | Thesis & Playbook

OpenClaw went from zero to the most-starred AI agent on GitHub in under three months. The appeal is real — an open-source framework that gives your machine eyes, hands, and a terminal. Browse the web, write and execute code, manage files, send messages. Powerful by design.

But power without security is a liability. And right now, OpenClaw has one of the worst security profiles of any widely adopted developer tool in recent memory.

On Paper

  • OpenClaw positions itself as the flexible, community-driven alternative to closed agent platforms. Install it, connect it to your preferred model provider, and it handles complex multi-step workflows autonomously. The ClawHub marketplace extends capability further — a catalogue of published skills that anyone can install to extend what the agent can do.

  • Three interfaces to choose from. Multi-provider model support. An active contributor community. 95,000 GitHub stars in under three months. On paper, it's exactly what technically curious developers want to experiment with.

In Practice

  • On 30 January 2026, researcher Mav Levin disclosed CVE-2026-25253, rated CVSS 8.8 out of 10. One click on a malicious webpage and an attacker gets full remote control of your OpenClaw instance — arbitrary code execution, configuration changes, complete gateway compromise. The root cause is architectural: OpenClaw's control UI trusts the gateway URL without validation and doesn't check WebSocket origins. Cross-site WebSocket hijacking takes milliseconds. Even localhost-only setups are exploitable. Belgium's Centre for Cybersecurity issued a national advisory telling citizens to stop using it.

  • The ClawHub marketplace compounds the problem significantly. Koi Security audited it and found 341 malicious skills — 12% of the entire registry. Snyk found 283 skills actively leaking credentials: API keys, OAuth tokens, SSH credentials, credit card numbers. Bitdefender puts the malicious share at 17%. One single user account published 199 fake skills. The barrier to publishing? A GitHub account one week old. No code review. No verification. Skills install cleanly, look legitimate, and exfiltrate silently.

  • Censys identified 21,639 OpenClaw instances exposed to the open internet as of 31 January — default ports, zero authentication, API keys and full chat histories readable by anyone who looks. Pillar Security found that sophisticated attackers aren't even trying to manipulate the AI layer — they're connecting directly to the WebSocket gateway and issuing raw commands, bypassing the model entirely.

  • Prompt injection adds a third attack vector. OpenClaw reads content from web pages, emails, and documents and interprets it as instruction. Malicious content hidden in a client document, a web page visited during research, or an email processed by the agent becomes a command the agent executes. There is no architectural defence for this at scale. Simon Willison's "lethal trifecta" — private data access, untrusted content ingestion, and external communication — describes OpenClaw exactly.

  • Beyond the security failures, the operational risk is significant. A developer set up a simple heartbeat check — asking OpenClaw what time it was every 30 minutes — and burned $20 overnight due to uncontrolled token usage to Claude Opus. Projected monthly cost for that one task: $750. No usage alerts. No budget limits. No cost guardrails. The companion platform Moltbook exposed its entire database publicly, including secret API keys. Scammers grabbed abandoned social handles within seconds of OpenClaw's rebranding from Clawdbot and Moltbot, launching a fake CLAWD token on Solana that pumped to $16 million before crashing 90%.

Issues / Backlash

  • Three high-impact security advisories in three consecutive days. Trail of Bits CEO Dan Guido called the codebase — maintained by 350 contributors plus AI agents writing code — "building a house without an architect." Google security founding member Heather Adkins said publicly: don't run it. Cisco called the exposed instance problem "an absolute nightmare."

  • Gartner predicts 40% of enterprise applications will embed AI agents by end of 2026, up from less than 5% in 2025. But only 6% of organisations have an advanced AI security strategy. OpenClaw's install count keeps climbing into that gap.

My Take (What to do)

The core risk for consultancies isn't that your organisation deploys OpenClaw officially. It's that someone already has, on a personal laptop, connected to your network, with access to client data.

Startup (15–40 staff):

Run a Shodan scan for ports 18789 and 18793 on your perimeter today. Search company machines for the OpenClaw directory. If you find it, don't punish — understand the use case, then redirect to an enterprise-grade alternative with proper controls. Write a one-page tool approval checklist covering: what data the tool accesses, where that data is sent, who approved it, and what the rollback procedure is. Apply it before any agent touches client data.

SMB (50–120 staff):

Shadow AI is almost certainly present at your scale. Assign one ops team member as AI tool owner — their job is to maintain a registry of what's deployed and under what conditions. Establish a basic incident response procedure for agent-related events. Update your BYOD policy to explicitly include agent frameworks. If you have clients in regulated industries, audit which tools have access to their data before your next contract renewal.

Enterprise (150–250 staff):

Run a full agent inventory now, not after an incident. Treat agent identities with the same rigour as human user identities: separate credentials, least-privilege permissions, session recording, periodic access reviews. Require security review — prompt injection testing, tool permission scoping, data exfiltration attempts — before any agent framework reaches production. Brief your incident response team on agent-specific threat vectors. Enterprise-grade alternatives exist with vendor SLAs, DLP integration, and audit logging. They are less exciting than 95,000 GitHub stars. They will not expose your client data to the open internet.

How to Try (15-minute path)

This week's experiment is a shadow AI audit, not a tool deployment.

  1. Search your network for exposed OpenClaw ports using Shodan. Query for ports 18789 and 18793. Note any results. (3 min)

  2. Search company-managed machines for the OpenClaw directory (~/.openclaw or equivalent). If you use an MDM solution, query for it. (5 min)

  3. Review your current AI tool approval process. If nothing is documented, write one sentence: "Any AI agent that accesses company or client data requires approval from [name] before installation." (5 min)

  4. Send it to your team. (2 min)

Success metric: You either find nothing (good), find something and now know about it (useful), or discover you have no policy (actionable). All three outcomes are better than the alternative.

Spotlight Tool | Gravitee

Purpose: Centralised governance for AI agents — visibility, access control, and audit logging across every agent interaction in your organisation.

Edge: Treats AI agents with the same discipline as APIs and event streams. Identity, access policies, and trust enforcement in a single framework. Built for teams that have already deployed agents and now need to govern them.

  • → Unified registry across all deployed agents and their permissions

  • → Runtime access controls and policy enforcement per agent identity

  • → Audit logging of every agent action, data access, and external call

  • → Agent-to-agent traffic monitoring via Gravitee Agent Mesh

  • → Open-source core with enterprise governance layer

The missing piece for most consultancies isn't more agent capability — it's knowing what agents are running, what they can access, and what they've done. Gravitee makes that visible.

Try it: Gravitee

What did you think of today's email?

Let me know below

Login or Subscribe to participate

n8n – An open‑source automation platform that lets you chain tools like DeepSeek, OpenAI, Gemini and your existing SaaS into real business workflows without paying per step. Ideal as the backbone for your first serious AI automations. Try: n8n

Did you find it useful? Or have questions? Please drop me a note. I respond to all emails. Simply reply to the newsletter or write to [email protected]

Referral - Share & Win

AiGentic Lab Insights

Keep Reading