Updated on 2026-04-16 GMT+08:00

OpenClaw Risk Notice and Security Suggestions

OpenClaw (formerly known as Moltbot and clawdbot) is a continuously running AI agent that can invoke multiple large language models. Acting as a gateway, OpenClaw lets you interact with multiple AI models through messaging channels, eliminating the need to switch between platforms and streamlining operations.

The OpenClaw application image provided by FlexusL is installed using official open-source scripts. Before deploying the image, read and acknowledge the following risks.

Table 1 Major risks

Category: Architecture defects (permissions out of control and network exposure)

Risk description: Permissions are not isolated. OpenClaw runs directly on the host with the user's permissions rather than in an isolated environment such as a Docker container or a VM. Once the agent is compromised, attackers can gain full control of the host.

Impact analysis: OpenClaw purchased on the cloud runs on its own VM, independent of the user's service systems. If that VM is compromised, the host running OpenClaw is affected.

Suggestions: Isolate OpenClaw from existing service systems using VPCs and subnets, or configure security groups and network ACLs for fine-grained access control to reduce risks.

Risk description: The network gateway is exposed. OpenClaw's core component is a web gateway that listens on port 18789 by default. An exposed gateway not only reveals server information but also provides a direct access path to the control panel.

Impact analysis: Port 18789 is disabled by default.

Suggestions: If you need to allow inbound access to the gateway, configure security groups and network ACLs for fine-grained access control to reduce risks.
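Before allowing any inbound access, it helps to verify from an external machine whether the gateway port is actually reachable. A minimal sketch; the host and port are placeholders for your own deployment, not values read from OpenClaw:

```python
import socket

def port_reachable(host: str, port: int, timeout: float = 2.0) -> bool:
    """Return True if a TCP connection to host:port succeeds, i.e. the
    port is exposed from wherever this script runs."""
    with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as sock:
        sock.settimeout(timeout)
        return sock.connect_ex((host, port)) == 0
```

Run from a machine outside the VPC against the server's public IP (for example, `port_reachable("203.0.113.10", 18789)` with your own address); while port 18789 stays closed, the expected result is False.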

Risk description: Localhost is incorrectly trusted. Early builds of OpenClaw and some versions from January 2026 assumed that connections from 127.0.0.1 were inherently trusted. However, users often deploy a reverse proxy to enable remote access. If the proxy is misconfigured, all external requests appear to OpenClaw to originate from the local host, directly bypassing the authentication mechanism.

Impact analysis: This risk does not exist if operations are performed according to the corresponding guide.

Suggestions: If you use a reverse proxy, pay attention to this security risk.
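If you do place a reverse proxy in front of the gateway, the proxy itself should enforce authentication and forward the real client address, so that the gateway's trust in 127.0.0.1 is never the only line of defense. A minimal nginx sketch; the domain, certificate setup, and htpasswd path are assumptions, and 18789 is the default gateway port mentioned above:

```nginx
server {
    listen 443 ssl;
    server_name openclaw.example.com;
    # ssl_certificate / ssl_certificate_key directives omitted for brevity.

    location / {
        # The proxy enforces its own authentication, independent of any
        # localhost trust inside the gateway.
        auth_basic           "OpenClaw gateway";
        auth_basic_user_file /etc/nginx/openclaw.htpasswd;

        proxy_pass http://127.0.0.1:18789;
        # Preserve the real client address so proxied requests are not
        # indistinguishable from genuinely local ones.
        proxy_set_header X-Real-IP       $remote_addr;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
    }
}
```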

Category: Data security (plaintext storage and cognitive context theft)

Risk description: Sensitive data is stored in plaintext. Forensic analysis shows that highly sensitive data, such as API keys and Slack/GitHub access tokens, is stored in plaintext in Markdown and JSON files on the local file system.

Impact analysis: Plaintext data includes the API keys of large models as well as access IDs and tokens for chat platforms. If the VM is compromised, this sensitive data is exposed.

Suggestions: Until the official architecture is hardened, you are advised to:

1. Periodically rotate credentials such as the API keys of large models.

2. Purchase Host Security Service (HSS) for in-depth workload protection and micro-isolation.

3. Purchase Cloud Firewall (CFW) for fine-grained control over network traffic.

4. Use the key management and automatic credential rotation of Data Encryption Workshop (DEW) to prevent credentials from being cracked.
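To gauge your exposure before rotating credentials, you can scan the Markdown and JSON files for token-like strings. A minimal Python sketch; the regexes are illustrative of common key formats (OpenAI-style, Slack, GitHub), not a complete list, and the directory to scan is your own deployment's, not an OpenClaw constant:

```python
import re
from pathlib import Path

# Illustrative patterns for common credential formats; OpenClaw's actual
# key formats and file layout may differ.
SECRET_PATTERNS = [
    re.compile(r"sk-[A-Za-z0-9]{20,}"),         # OpenAI-style API keys
    re.compile(r"xox[abpr]-[A-Za-z0-9-]{10,}"), # Slack tokens
    re.compile(r"ghp_[A-Za-z0-9]{36}"),         # GitHub personal access tokens
]

def scan_for_plaintext_secrets(root: str) -> list[tuple[str, str]]:
    """Return (file, secret prefix) pairs for token-like strings found in
    Markdown and JSON files under root."""
    findings = []
    for path in Path(root).rglob("*"):
        if path.suffix not in {".md", ".json"} or not path.is_file():
            continue
        text = path.read_text(errors="ignore")
        for pattern in SECRET_PATTERNS:
            for match in pattern.findall(text):
                # Report only a prefix so the scan itself does not leak secrets.
                findings.append((str(path), match[:8] + "..."))
    return findings
```

Any hit is a credential to rotate immediately; an empty result does not prove the absence of secrets, only that none matched these patterns.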

Risk description: Cognitive context is at risk of theft. The memory directory records the AI's long-term memory: not only technical credentials, but also psychological profiles of users, work context, private conversation summaries, and interpersonal relationship networks. Stealer malware families such as RedLine and Lumma have already updated their target lists to specifically scan for and exfiltrate this type of directory.

Impact analysis: Personal chat records are exposed.

Suggestions: Same as above.
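Until stronger isolation is available, the memory directory can at least be made unreadable to other local accounts. A minimal sketch for a Linux host; the directory path is an assumption for your own deployment, not an OpenClaw constant:

```python
import os
import stat
from pathlib import Path

def lock_down(dir_path: str) -> None:
    """Restrict a sensitive directory tree to its owner (0700 for
    directories, 0600 for files), so processes running under other
    local accounts cannot read the agent's long-term memory."""
    root = Path(dir_path)
    os.chmod(root, stat.S_IRWXU)  # drwx------
    for path in root.rglob("*"):
        if path.is_dir():
            os.chmod(path, stat.S_IRWXU)
        else:
            os.chmod(path, stat.S_IRUSR | stat.S_IWUSR)  # -rw-------
```

For example, `lock_down("/path/to/openclaw/memory")` with your actual memory directory. This does not help against an attacker who already runs as the same user; it only narrows the local attack surface.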

Risk description: New threat forms are emerging. This is not just a password leak; it is cognitive context theft. Attackers obtain not only passwords but a complete digital profile of the user, which can be used to launch targeted spear-phishing attacks or to exploit AI trust relationships for fraud.

Impact analysis: Same as above.

Suggestions: Same as above.

Category: Supply chain crisis (security risks caused by name changes)

Risk description: -

Impact analysis: N/A

Suggestions: N/A

Table 2 Attack vectors and suggestions

Attack vector: Gateway exposure and privilege bypass attacks

Suggestions: See the suggestions for gateway exposure risks in Table 1.

Attack vector: Supply chain poisoning and skill store attacks

Suggestions: Follow the security suggestions in the documentation:

  • Require manual confirmation for any sensitive operation (such as deleting files, sending emails, or transferring money).
  • Follow the principle of least privilege (PoLP): grant the AI only the permissions required to complete a specific task, rather than all permissions at once.

Attack vector: Indirect prompt injection attacks

Suggestions: Same as above.

Attack vector: Lateral movement and pivoting attacks

Suggestions: See the suggestions for the lack of permission isolation in Table 1.
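The manual-confirmation suggestion above can be sketched as a thin gate around the agent's action dispatch. Everything here is hypothetical (OpenClaw's real plugin/tool API is not shown); the point is only that sensitive actions must pass through a human-approval callback before they execute:

```python
# Hypothetical action names; adapt the set to the tools your agent exposes.
SENSITIVE_ACTIONS = {"delete_file", "send_email", "transfer_money"}

def run_action(action: str, confirm) -> str:
    """Dispatch an agent action, but require explicit human confirmation
    (via the `confirm` callback) before any sensitive operation runs."""
    if action in SENSITIVE_ACTIONS and not confirm(action):
        return f"blocked: {action} was not confirmed"
    # Non-sensitive actions, or confirmed sensitive ones, proceed.
    return f"executed: {action}"
```

In an interactive deployment, `confirm` would prompt the operator (for example through the messaging channel) and return True only on an explicit "yes"; in the sketch it is just a callable so the gate can be tested.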