
I Installed OpenClaw. 2 Hours Later, an Attacker Had Full Access to My Machine.

How a single npm install led to an undetected breach — and the forensic trail that exposed it.

Security · DevOps · AI Agents

On a Monday morning in late January 2026, I installed OpenClaw (formerly Clawdbot) on my MacBook. By that evening, an attacker had injected a remote access trojan into our codebase and established a persistent backdoor on my machine. It then seeded a worm that spread to other developers on the team. The malware went undetected for days. We only found it by accident.

This is the full technical breakdown of what happened, how we traced it, and what every developer running AI agents should know.

It Started With a Build Error

We weren't doing a security audit. We were debugging a routine React Native build failure — a module registration error that had nothing to do with security. While digging through config files looking for the cause, something caught our eye.

The closing line of a Babel config file looked normal at first glance:

};

But the line was unusually long in the raw file. Scrolling right — way right, past hundreds of invisible spaces — revealed a wall of obfuscated JavaScript:

};                              global['!']='8-270-2';var _$_1e42=(function(l,e){...

Someone had appended a massive block of obfuscated code to the end of the last line of a config file, padded with whitespace to push it completely off-screen. It was invisible in every code editor, every diff tool, every code review. You'd only see it if you happened to scroll horizontally past column 300.

We checked our other repos. The same payload was hiding in tailwind.config.js in one project and postcss.config.mjs in another. Three repositories, three different config files, same invisible malware.

What the Code Actually Did

It took several hours to fully deobfuscate the payload. It was layered:

Layer 1 was a string decoder — a character-shuffling cipher that reconstructed function names from a scrambled key. Layer 2 fed those decoded strings into JavaScript's Function() constructor, which is just eval() wearing a trenchcoat. Layer 3 was the real payload.
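To make the layering concrete, here's a minimal, harmless sketch of the pattern. The decoder and its shift cipher are invented for illustration — the real payload used a shuffled-key scheme — but the `Function()` trick is exactly the one we deobfuscated:

```javascript
// Layer 1 (simplified): a trivial character-shift decoder that
// reconstructs source code from a scrambled string. The real malware
// used a far hairier shuffled-key cipher, but the idea is identical.
function decode(scrambled, shift) {
  return [...scrambled]
    .map(c => String.fromCharCode(c.charCodeAt(0) - shift))
    .join("");
}

// Layer 2: feed the decoded string into the Function() constructor,
// which compiles and runs arbitrary strings -- eval() in a trenchcoat.
const source = decode("sfuvso!2!,!3", 1); // -> "return 1 + 2"
const layer2 = new Function(source);
console.log(layer2()); // 3 -- in the real payload, Layer 3 was the RAT
```

Nothing in this chain looks like `eval` to a naive string scan, which is exactly the point.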

It was a remote access trojan. A good one.

The RAT

The malware established a WebSocket-based remote shell with full filesystem access. Once connected, the attacker could issue any of these commands:

Command          What it does
upload:          Exfiltrate files from the victim's machine
download:        Drop files onto the victim's machine
eval:            Execute arbitrary JavaScript
evalb64:         Execute base64-encoded JavaScript
ws:              Change the C2 WebSocket URL on the fly
kill             Stop the RAT
Any other text   Execute as a shell command and return output

The RAT could read any file on the system — .env files, SSH keys, git credentials, anything. It could write files too, which is how it re-injected itself into config files across repos. It ran via child_process.spawn with windowsHide: true and stdio: "ignore" — completely silent, no terminal output, no visible process window.

Blockchain C2

Normally malware calls home to a server. Take down the server, the malware dies. This one was different.

Instead of hardcoding a C2 server URL, the malware pulled its instructions from the blockchain. Here's the exact chain:

  1. Malware reads the latest transaction from a Tron wallet (TMfKQEd7TJJa5xNZJZ2Lep838vrzrs7mAP)
  2. The Tron transaction data contains a BSC (Binance Smart Chain) transaction hash — stored reversed
  3. Malware fetches that BSC transaction's input data via public RPC endpoints
  4. XOR-decrypts it with hardcoded key 2[gWfGj;<:-93Z^C
  5. The result is another obfuscated loader that repeats the process with a second Tron wallet (TLmj13VL4p6NQ7jpxz8d9uYY6FUKCYatSe) and a different XOR key
  6. The final payload is the RAT itself

All BSC transactions originate from the same attacker wallet: 0x9bc1355344b54dedf3e44296916ed15653844509
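The XOR step (4) is simple to reproduce. Here's a sketch using the first-stage key from the chain above — the plaintext below is placeholder demo data, not the on-chain payload:

```javascript
// XOR "decryption" with the hardcoded first-stage key. XOR is
// symmetric, so the same function encrypts and decrypts; the loader
// applied it to the BSC transaction's input data.
const KEY = "2[gWfGj;<:-93Z^C";

function xorWithKey(buf, key) {
  const out = Buffer.alloc(buf.length);
  for (let i = 0; i < buf.length; i++) {
    out[i] = buf[i] ^ key.charCodeAt(i % key.length);
  }
  return out;
}

// Step 2 note: the BSC tx hash is stored reversed in the Tron data,
// so the loader has to flip it back before querying the RPC endpoint.
const unreverse = s => [...s].reverse().join("");

// Round-trip demo with placeholder plaintext:
const plain = Buffer.from("var loader = /* next stage */ null;");
const cipher = xorWithKey(plain, KEY);
console.log(xorWithKey(cipher, KEY).toString()); // original text back
```

A repeating single-byte XOR keystream is cryptographically weak, but here it only needs to defeat pattern-matching scanners, not cryptanalysis.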

Three payloads are still on-chain as of this writing:

Date          BSC Block    Payload Size
Dec 1, 2025   70,128,861   7,410 chars
Oct 1, 2025   63,120,689   7,526 chars
Oct 1, 2025   63,118,681   7,432 chars

You can't delete a blockchain transaction. You can't trace who posted it. The attacker can update the C2 server anytime by posting a new transaction. Multiple chains for redundancy — if one goes down, the malware falls back to the others. They turned the blockchain into an anonymous, indestructible remote control.

The C2 infrastructure is still live. Anyone still infected is still calling home to these wallets right now.

Execution

The payload executed automatically whenever anyone ran npm run dev, npx expo start, or next dev — any command that loaded the config file. But it was smart about it. The malware throttled itself to only execute once every 30 minutes, so rapid restarts of your dev server wouldn't spawn duplicate connections or generate suspicious network traffic. One quiet phone-home every half hour. Just enough to maintain access. Not enough to notice.

The Ghost in the Git Log

The injection itself was sophisticated, but the anti-forensics were even more impressive.

Alongside each infected config file, the attacker dropped a Windows batch script called temp_auto_push.bat. Here's what it did:

  1. Read the last legitimate commit's timestamp, author name, and email
  2. Changed the system clock to match that timestamp
  3. Impersonated the previous author via git config --local
  4. Ran git commit --amend --no-verify
  5. Force-pushed to the remote

The result: the malware injection was hidden inside what appeared to be a completely normal commit, by a known developer, at an expected time. Running git log showed nothing unusual. Running git blame pointed at a teammate who had no idea. It was the git equivalent of breaking into someone's house and editing their security camera footage.

Pulling the Thread

We knew we'd been hit, but we needed to understand how. This is where it got interesting.

The Timezone That Didn't Belong

Every git commit stores two timestamps: the author date (when the code was “written”) and the committer date (when the commit object was actually created). The attacker's script spoofed the clock to fake these dates, but there's one thing you can't easily fake on a system: the timezone offset.

We compared the original commits to the malware commits:

Commit                 Original Timezone   Malware Timezone
Repo A (our account)   -0500 (EST)         -0800 (PST)
Repo B (our account)   -0500 (EST)         -0800 (PST)
Repo C (contractor)    +0300 (EAT)         -0800 (PST)

Every malware commit carried a committer timezone of -0800. None of our developers are in Pacific time. The attacker's infrastructure had tagged every single commit with their own timezone. A perfect fingerprint hidden in plain sight.

Forensic takeaway: When investigating suspicious commits, always check git log --format="%ai %ci". If the author timezone and committer timezone don't match — or the committer timezone is unexpected — you may be looking at a tampered commit.
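That check is easy to script. A small sketch that parses `git log --format="%h|%ai|%ci"` output and flags commits whose author and committer offsets disagree — the input below is sample text, not our actual history:

```javascript
// Flag commits whose author timezone and committer timezone differ.
// Expects lines of the form "hash|author date|committer date", as
// produced by: git log --format="%h|%ai|%ci"
function findTamperedCommits(logText) {
  const tzOf = date => date.trim().split(" ").pop(); // e.g. "-0800"
  return logText
    .trim()
    .split("\n")
    .map(line => {
      const [hash, authorDate, committerDate] = line.split("|");
      return { hash, authorTz: tzOf(authorDate), committerTz: tzOf(committerDate) };
    })
    .filter(c => c.authorTz !== c.committerTz);
}

const sample = [
  "a4f7c2|2026-01-27 22:10:03 -0500|2026-01-27 22:10:03 -0500", // clean
  "e83b1d9|2026-01-27 22:47:11 -0500|2026-01-27 22:47:11 -0800", // spoofed
].join("\n");

console.log(findTamperedCommits(sample)); // flags only e83b1d9
```

A mismatch isn't proof of tampering on its own (rebases and cherry-picks across machines produce them legitimately), but an unexpected committer offset is exactly the kind of anomaly worth pulling on.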

The Reflog Doesn't Forget

Git's reflog recorded every state change to origin/main. When we checked:

a4f7c2  origin/main@{Jan 27 22:47}: update by push          ← our clean commit
e83b1d9 origin/main@{Feb 05 12:31}: fetch: forced-update    ← it's been replaced

Our clean commit was on the remote on January 27. Sometime before February 5, it had been force-replaced with the malware version. The reflog entry forced-update is git's quiet way of saying “someone rewrote history on the remote.”

The Worm

Then we noticed something we didn't expect: the malware in our third repo was pushed from a different developer's GitHub account. Not ours. A contractor who worked on that project.

But the committer timezone on their malware commit was still -0800. Not +0300, which was their actual timezone. Their machine had been compromised too.

The infection chain was clear:

  1. Our machine was compromised first
  2. Malware was pushed to shared repos using our credentials
  3. A contractor pulled the infected code and ran a build
  4. The config file malware executed on their machine
  5. The malware on their machine injected code into other repos and pushed using their credentials

It was a worm. It spread developer-to-developer through the most mundane action in software engineering: pulling main and running your dev server.

The Root Cause

We traced the timeline from our shell history:

Time               Event
10:44 AM, Jan 26   npm i -g clawdbot
10:46 AM, Jan 26   clawdbot onboard — provided API keys
8:25 PM, Jan 27    Reloaded the clawdbot LaunchAgent
10:47 PM, Jan 27   First malware commit appears — 2 hours later

OpenClaw (then called Clawdbot) had a critical RCE vulnerability — CVE-2026-25253 (CVSS 8.8) — that allows 1-click remote code execution via authentication token exfiltration through an unsanitized gatewayUrl parameter. On top of that, nearly 900 malicious “skills” (~20% of the ClawHub registry) were discovered distributing Atomic macOS Stealer (AMOS) as part of the “ClawHavoc” campaign.

Once the agent was compromised, the attacker had everything: an interactive shell on our machine, access to every file, every git repo, every credential. They didn't need to phish us or exploit a zero-day in our code. We ran npm install and gave them the keys ourselves.

The Damage

With a remote shell that had been active for days, we had to assume anything stored on the development machine was compromised — API keys, SSH keys, environment variables, the works.

We killed the RAT process, verified no persistence mechanisms survived, and began the long process of rotating every credential that had ever touched that machine.

The Cleanup

Immediate response

  • Killed the active RAT process
  • Cleaned all infected config files across all repositories
  • Deleted the attacker's batch scripts
  • Uninstalled the compromised AI agent
  • Scanned for persistence in LaunchAgents, LaunchDaemons, crontab, and shell profiles
  • Alerted team members to check their machines
  • Began rotating every credential that had touched the machine
  • Full system audit confirmed the malware was confined to config files only — no application source code was modified

Detection Guide

If you run OpenClaw/Clawdbot or any AI coding agent with system-level access, here's how to check if you've been hit by a similar attack:

Check your config files:

# Search for the specific malware signature we encountered
grep -r "wuqktamceigynzbosdctpusocrjhrflovnxrt" ~/dev 2>/dev/null
grep -rF "global['!']" ~/dev 2>/dev/null  # -F matches the brackets literally, not as a regex class

# Look for suspiciously long lines in config files
# (the ** glob needs zsh, or bash with `shopt -s globstar` enabled)
awk 'length > 500' ~/dev/**/babel.config.* ~/dev/**/tailwind.config.* \
    ~/dev/**/postcss.config.* ~/dev/**/webpack.config.* 2>/dev/null

# Find the attacker's batch script
find ~/dev -name "temp_auto_push.bat" 2>/dev/null

Check for active RAT processes:

ps aux | grep "node -e" | grep "global" | grep -v grep

Check for git history tampering:

# Look for committer timezone mismatches in recent commits
git log --format="%h %ai %ci %s" -20
# If author timezone and committer timezone differ, investigate that commit

Check for persistence (macOS):

ls ~/Library/LaunchAgents/ /Library/LaunchDaemons/
# Look for anything you don't recognize
crontab -l
# Check shell profile for unexpected additions
tail -20 ~/.zshrc ~/.bashrc ~/.bash_profile

Takeaway

We found this malware by accident. If we hadn't been debugging an unrelated build error that led us to manually inspect a config file, the RAT would still be running. Our git history looked clean. Our builds worked fine. Our tests passed. The malware was invisible to every normal development workflow.

OpenClaw hit 200,000 GitHub stars in weeks. Millions of installs. A skills marketplace with thousands of community contributions. The velocity is incredible and the tool is genuinely useful. But the security model is fundamentally broken. It runs with the same permissions as the developer, has access to every credential on the machine, and pulls skills from a community registry where 20% of submissions turned out to be malware.

We got lucky. The average dwell time for a sophisticated breach is measured in months.

Check your config files. Enable branch protection. And think very carefully before giving any tool — AI or otherwise — unrestricted access to your development environment.


If you've been affected by a similar attack, feel free to reach out. We're happy to share our detection scripts and remediation playbook.