
The Ultimate Guide to OpenClaw: Architecture, Setup, and Use Cases (2026 Edition)

Published on January 30, 2026


Introduction

OpenClaw (formerly Moltbot, formerly Clawdbot) is an open-source, locally-run AI assistant designed to act as a proactive “lobster” (its mascot) on your computer. It integrates with chat apps, performs autonomous tasks, and maintains persistent memory for personalized interactions. Launched by developer Peter Steinberger, it has gained viral popularity in early 2026 for its privacy-focused, extensible design. This guide is based on validated information from official sources (openclaw.ai, GitHub repo), articles, and community discussions.

Update: The lobster has molted into its final form 🦞 Clawd → Moltbot → OpenClaw

100k+ GitHub stars. 2M visitors in a week. And finally, a name that’ll stick.

Your assistant. Your machine. Your rules.

OpenClaw emphasizes local execution for data privacy, supports multiple AI models, and can run on modest hardware. It’s MIT-licensed and has grown to 100k+ GitHub stars.

What is OpenClaw?

OpenClaw is a messaging-first AI agent that turns chat apps into a hub for automation. It runs as a background service, processes natural language commands, and executes actions like emailing, browsing, or scripting. Unlike passive chatbots (e.g., ChatGPT), it’s proactive—sending reminders or briefings without prompts. It uses a gateway to orchestrate tasks, supports multi-agent workflows, and can self-improve by generating skills.

Key Differentiators

  • Privacy-First: Data stays local; no cloud dependency unless using external models.
  • Extensibility: Skills marketplace (ClawHub) and self-written skills.
  • Cross-Platform: Works on macOS, Linux, Windows (via WSL2), iOS/Android nodes, and Cloudflare Workers.
  • Model Agnostic: Defaults to Anthropic models (e.g., Claude 3.5 Sonnet, Claude 3 Opus, or “Claude Code” variants). Supports Local LLMs (e.g., MiniMax M2.1) and recently added Ollama auto-discovery.

Use Cases & Real-World Examples

OpenClaw shines in scenarios requiring persistent context and autonomy.

1. Personal Productivity

  • Email Triage: Connects to Gmail to summarize inboxes, unsubscribe from spam, and draft replies.
  • Calendar: Manages schedules and sends traffic-based reminders.
  • Daily Briefings: Wakes you up with a summary of news, weather, and tasks.
  • Health: Fetches WHOOP/Oura data and generates guided meditations using TTS.

2. Development & Coding

  • Autonomous Loops: Can run “Claude Code” loops to write features, run tests, and fix bugs autonomously.
  • Pull Requests: Opens and reviews GitHub PRs.
  • Ops Management: Manages Hetzner servers, monitors CI/CD pipelines, and integrates with Sentry for error alerts.

3. Automation & Smart Home

  • Device Control: Controls Home Assistant devices (e.g., turning off Hue lights, adjusting Winix purifiers).
  • Web Tasks: Scrapes websites, monitors stock prices, and submits forms.
  • Integrations: Syncs with Notion, Todoist, Obsidian, Spotify, and X (Twitter).

OpenClaw on Cloudflare Workers

Run the OpenClaw personal AI assistant in a Cloudflare Sandbox.


Experimental: This is a proof of concept demonstrating that OpenClaw can run in Cloudflare Sandbox. It is not officially supported and may break without notice. Use at your own risk.


Requirements

This project uses the following Cloudflare features, all of which have free tiers:

  • Cloudflare Access (authentication)
  • Browser Rendering (for browser navigation)
  • AI Gateway (optional, for API routing/analytics)
  • R2 Storage (optional, for persistence)

This project packages OpenClaw to run in a Cloudflare Sandbox container, providing a fully managed, always-on deployment without needing to self-host. Optional R2 storage enables persistence across container restarts.

Cloudflare Sandbox Architecture

[Figure: moltworker architecture]

Quick Start (Cloudflare)

Cloudflare Sandboxes are available on the Workers Paid plan.

# Install dependencies
npm install

# Set your API key (direct Anthropic access)
npx wrangler secret put ANTHROPIC_API_KEY

# Or use AI Gateway instead (see "Optional: Cloudflare AI Gateway" below)
# npx wrangler secret put AI_GATEWAY_API_KEY
# npx wrangler secret put AI_GATEWAY_BASE_URL

# Generate and set a gateway token (required for remote access)
# Save this token - you'll need it to access the Control UI
export MOLTBOT_GATEWAY_TOKEN=$(openssl rand -hex 32)
echo "Your gateway token: $MOLTBOT_GATEWAY_TOKEN"
echo "$MOLTBOT_GATEWAY_TOKEN" | npx wrangler secret put MOLTBOT_GATEWAY_TOKEN

# Deploy
npm run deploy

After deploying, open the Control UI with your token:

https://your-worker.workers.dev/?token=YOUR_GATEWAY_TOKEN

Replace your-worker with your actual worker subdomain and YOUR_GATEWAY_TOKEN with the token you generated above.

Note: The first request may take 1-2 minutes while the container starts.

Important: You will not be able to use the Control UI until you complete the following steps. You MUST:

  1. Set up Cloudflare Access to protect the admin UI
  2. Pair your device via the admin UI at /_admin/

You’ll also likely want to enable R2 storage so your paired devices and conversation history persist across container restarts (optional but recommended).

Setting Up the Admin UI

To use the admin UI at /_admin/ for device management, you need to:

  1. Enable Cloudflare Access on your worker
  2. Set the Access secrets so the worker can validate JWTs

1. Enable Cloudflare Access on workers.dev

The easiest way to protect your worker is using the built-in Cloudflare Access integration for workers.dev:

  1. Go to the Workers & Pages dashboard
  2. Select your Worker (e.g., moltbot-sandbox)
  3. In Settings, under Domains & Routes, in the workers.dev row, click the meatballs menu (...)
  4. Click Enable Cloudflare Access
  5. Click Manage Cloudflare Access to configure who can access:
    • Add your email address to the allow list
    • Or configure other identity providers (Google, GitHub, etc.)
  6. Copy the Application Audience (AUD) tag from the Access application settings. This will be your CF_ACCESS_AUD in Step 2 below

2. Set Access Secrets

After enabling Cloudflare Access, set the secrets so the worker can validate JWTs:

# Your Cloudflare Access team domain (e.g., "myteam.cloudflareaccess.com")
npx wrangler secret put CF_ACCESS_TEAM_DOMAIN

# The Application Audience (AUD) tag from your Access application that you copied in the step above
npx wrangler secret put CF_ACCESS_AUD

You can find your team domain in the Zero Trust Dashboard under Settings > Custom Pages (it’s the subdomain before .cloudflareaccess.com).

3. Redeploy

npm run deploy

Now visit /_admin/ and you’ll be prompted to authenticate via Cloudflare Access before accessing the admin UI.

Alternative: Manual Access Application

If you prefer more control, you can manually create an Access application:

  1. Go to Cloudflare Zero Trust Dashboard
  2. Navigate to Access > Applications
  3. Create a new Self-hosted application
  4. Set the application domain to your Worker URL (e.g., moltbot-sandbox.your-subdomain.workers.dev)
  5. Add paths to protect: /_admin/*, /api/*, /debug/*
  6. Configure your desired identity providers (e.g., email OTP, Google, GitHub)
  7. Copy the Application Audience (AUD) tag and set the secrets as shown above

Local Development

For local development, create a .dev.vars file with:

DEV_MODE=true               # Skip Cloudflare Access auth + bypass device pairing
DEBUG_ROUTES=true           # Enable /debug/* routes (optional)
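
Then start the worker locally; wrangler's dev server reads .dev.vars automatically:

# Local dev server (see the Troubleshooting section below if this fails with an Unauthorized error)
npm run dev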

Authentication

By default, OpenClaw uses device pairing for authentication. When a new device (browser, CLI, etc.) connects, it must be approved via the admin UI at /_admin/.

Device Pairing

  1. A device connects to the gateway
  2. The connection is held pending until approved
  3. An admin approves the device via /_admin/
  4. The device is now paired and can connect freely

This is the most secure option as it requires explicit approval for each device.

Gateway Token (Required)

A gateway token is required to access the Control UI when hosted remotely. Pass it as a query parameter:

https://your-worker.workers.dev/?token=YOUR_TOKEN
wss://your-worker.workers.dev/ws?token=YOUR_TOKEN

Note: Even with a valid token, new devices still require approval via the admin UI at /_admin/ (see Device Pairing above).

For local development only, set DEV_MODE=true in .dev.vars to skip Cloudflare Access authentication and enable allowInsecureAuth (bypasses device pairing entirely).
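
To sanity-check the token after deploying, you can hit both endpoints from the command line. This is a minimal sketch: replace the worker subdomain with yours, and websocat is an optional third-party WebSocket client (any equivalent tool works).

# HTTP: the Control UI should load when the token is accepted
curl -I "https://your-worker.workers.dev/?token=$MOLTBOT_GATEWAY_TOKEN"

# WebSocket: open the gateway socket (assumes websocat is installed)
websocat "wss://your-worker.workers.dev/ws?token=$MOLTBOT_GATEWAY_TOKEN"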

Persistent Storage (R2)

By default, OpenClaw data (configs, paired devices, conversation history) is lost when the container restarts. To enable persistent storage across sessions, configure R2:

1. Create R2 API Token

  1. Go to R2 > Overview in the Cloudflare Dashboard
  2. Click Manage R2 API Tokens
  3. Create a new token with Object Read & Write permissions
  4. Select the moltbot-data bucket (created automatically on first deploy)
  5. Copy the Access Key ID and Secret Access Key

2. Set Secrets

# R2 Access Key ID
npx wrangler secret put R2_ACCESS_KEY_ID

# R2 Secret Access Key
npx wrangler secret put R2_SECRET_ACCESS_KEY

# Your Cloudflare Account ID
npx wrangler secret put CF_ACCOUNT_ID

To find your Account ID: Go to the Cloudflare Dashboard, click the three dots menu next to your account name, and select “Copy Account ID”.
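
Alternatively, wrangler can print your account details from the command line (the Account ID appears in the output):

npx wrangler whoami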

How It Works

R2 storage uses a backup/restore approach for simplicity:

On container startup:

  • If R2 is mounted and contains backup data, it’s restored to the OpenClaw config directory
  • OpenClaw uses its default paths (no special configuration needed)

During operation:

  • A cron job runs every 5 minutes to sync the OpenClaw config to R2
  • You can also trigger a manual backup from the admin UI at /_admin/

In the admin UI:

  • When R2 is configured, you’ll see “Last backup: [timestamp]”
  • Click “Backup Now” to trigger an immediate sync

Without R2 credentials, OpenClaw still works but uses ephemeral storage (data lost on container restart).
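
If you want to confirm that backups are actually landing in the bucket, R2 exposes an S3-compatible endpoint, so any S3 client can list it. A sketch using the AWS CLI (an extra tool, not required by the project) with the credentials created above:

# List backup objects in the moltbot-data bucket via R2's S3-compatible endpoint
export AWS_ACCESS_KEY_ID=<your R2 Access Key ID>
export AWS_SECRET_ACCESS_KEY=<your R2 Secret Access Key>
aws s3 ls "s3://moltbot-data/" --endpoint-url "https://<ACCOUNT_ID>.r2.cloudflarestorage.com"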

Container Lifecycle

By default, the sandbox container stays alive indefinitely (SANDBOX_SLEEP_AFTER=never). This is recommended because cold starts take 1-2 minutes.

To reduce costs for infrequently used deployments, you can configure the container to sleep after a period of inactivity:

npx wrangler secret put SANDBOX_SLEEP_AFTER
# Enter: 10m (or 1h, 30m, etc.)

When the container sleeps, the next request will trigger a cold start. If you have R2 storage configured, your paired devices and data will persist across restarts.

Admin UI

[Screenshot: the admin UI]

Access the admin UI at /_admin/ to:

  • R2 Storage Status - Shows if R2 is configured, last backup time, and a “Backup Now” button
  • Restart Gateway - Kill and restart the OpenClaw gateway process
  • Device Pairing - View pending requests, approve devices individually or all at once, view paired devices

The admin UI requires Cloudflare Access authentication (or DEV_MODE=true for local development).

Debug Endpoints

Debug endpoints are available at /debug/* when enabled (requires DEBUG_ROUTES=true and Cloudflare Access):

  • GET /debug/processes - List all container processes
  • GET /debug/logs?id=<process_id> - Get logs for a specific process
  • GET /debug/version - Get container and OpenClaw version info
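
As an example, assuming cloudflared is installed and your email is allowed by the Access policy, you can mint an Access token and call a debug route directly (the worker URL is a placeholder):

# Fetch a Cloudflare Access token for this app, then query the version endpoint
TOKEN=$(cloudflared access token --app="https://your-worker.workers.dev")
curl -H "cf-access-token: $TOKEN" "https://your-worker.workers.dev/debug/version"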

Optional: Chat Channels

Telegram

npx wrangler secret put TELEGRAM_BOT_TOKEN
npm run deploy

Discord

npx wrangler secret put DISCORD_BOT_TOKEN
npm run deploy

Slack

npx wrangler secret put SLACK_BOT_TOKEN
npx wrangler secret put SLACK_APP_TOKEN
npm run deploy

Optional: Browser Automation (CDP)

This worker includes a Chrome DevTools Protocol (CDP) shim that enables browser automation capabilities. This allows OpenClaw to control a headless browser for tasks like web scraping, screenshots, and automated testing.

Setup

  1. Set a shared secret for authentication:

     npx wrangler secret put CDP_SECRET
     # Enter a secure random string

  2. Set your worker’s public URL:

     npx wrangler secret put WORKER_URL
     # Enter: https://your-worker.workers.dev

  3. Redeploy:

     npm run deploy

Endpoints

Endpoint                          Description
GET /cdp/json/version             Browser version information
GET /cdp/json/list                List available browser targets
GET /cdp/json/new                 Create a new browser target
WS  /cdp/devtools/browser/{id}    WebSocket connection for CDP commands

All endpoints require the CDP_SECRET header for authentication.
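
A hedged example of calling the shim directly from the command line. The exact header name is not specified here, so the one below is a placeholder; check the worker source for the header the shim actually expects.

# Hypothetical header name, shown for illustration only
curl -H "x-cdp-secret: $CDP_SECRET" "https://your-worker.workers.dev/cdp/json/version"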

Built-in Skills

The container includes pre-installed skills in /root/clawd/skills/:

cloudflare-browser

Browser automation via the CDP shim. Requires CDP_SECRET and WORKER_URL to be set (see Browser Automation above).

Scripts:

  • screenshot.js - Capture a screenshot of a URL
  • video.js - Create a video from multiple URLs
  • cdp-client.js - Reusable CDP client library

Usage:

# Screenshot
node /root/clawd/skills/cloudflare-browser/scripts/screenshot.js https://example.com output.png

# Video from multiple URLs
node /root/clawd/skills/cloudflare-browser/scripts/video.js "https://site1.com,https://site2.com" output.mp4 --scroll

See skills/cloudflare-browser/SKILL.md for full documentation.

Optional: Cloudflare AI Gateway

You can route API requests through Cloudflare AI Gateway for caching, rate limiting, analytics, and cost tracking. AI Gateway supports multiple providers — configure your preferred provider in the gateway and use these env vars:

Setup

  1. Create an AI Gateway in the AI Gateway section of the Cloudflare Dashboard.
  2. Add a provider (e.g., Anthropic) to your gateway
  3. Set the gateway secrets:

You’ll find the base URL on the Overview tab of your newly created gateway. At the bottom of the page, expand the Native API/SDK Examples section and select “Anthropic”.

# Your provider's API key (e.g., Anthropic API key)
npx wrangler secret put AI_GATEWAY_API_KEY

# Your AI Gateway endpoint URL
npx wrangler secret put AI_GATEWAY_BASE_URL
# Enter: https://gateway.ai.cloudflare.com/v1/{account_id}/{gateway_id}/anthropic
  4. Redeploy:

     npm run deploy

The AI_GATEWAY_* variables take precedence over ANTHROPIC_* if both are set.

All Secrets Reference

Secret                  Required  Description
AI_GATEWAY_API_KEY      Yes*      API key for your AI Gateway provider (requires AI_GATEWAY_BASE_URL)
AI_GATEWAY_BASE_URL     Yes*      AI Gateway endpoint URL (required when using AI_GATEWAY_API_KEY)
ANTHROPIC_API_KEY       Yes*      Direct Anthropic API key (fallback if AI Gateway not configured)
ANTHROPIC_BASE_URL      No        Direct Anthropic API base URL (fallback)
OPENAI_API_KEY          No        OpenAI API key (alternative provider)
CF_ACCESS_TEAM_DOMAIN   Yes*      Cloudflare Access team domain (required for admin UI)
CF_ACCESS_AUD           Yes*      Cloudflare Access application audience (required for admin UI)
MOLTBOT_GATEWAY_TOKEN   Yes       Gateway token for authentication (pass via ?token= query param)
DEV_MODE                No        Set to true to skip CF Access auth + device pairing (local dev only)
DEBUG_ROUTES            No        Set to true to enable /debug/* routes
SANDBOX_SLEEP_AFTER     No        Container sleep timeout: never (default) or a duration like 10m, 1h
R2_ACCESS_KEY_ID        No        R2 access key for persistent storage
R2_SECRET_ACCESS_KEY    No        R2 secret key for persistent storage
CF_ACCOUNT_ID           No        Cloudflare account ID (required for R2 storage)
TELEGRAM_BOT_TOKEN      No        Telegram bot token
TELEGRAM_DM_POLICY      No        Telegram DM policy: pairing (default) or open
DISCORD_BOT_TOKEN       No        Discord bot token
DISCORD_DM_POLICY       No        Discord DM policy: pairing (default) or open
SLACK_BOT_TOKEN         No        Slack bot token
SLACK_APP_TOKEN         No        Slack app token
CDP_SECRET              No        Shared secret for CDP endpoint authentication (see Browser Automation)
WORKER_URL              No        Public URL of the worker (required for CDP)

* Required depending on configuration: provide either the AI Gateway pair or ANTHROPIC_API_KEY; the CF Access secrets are required for the admin UI.

Security Considerations

Authentication Layers

OpenClaw in Cloudflare Sandbox uses multiple authentication layers:

  1. Cloudflare Access - Protects admin routes (/_admin/, /api/*, /debug/*). Only authenticated users can manage devices.

  2. Gateway Token - Required to access the Control UI. Pass via ?token= query parameter. Keep this secret.

  3. Device Pairing - Each device (browser, CLI, chat platform DM) must be explicitly approved via the admin UI before it can interact with the assistant. This is the default “pairing” DM policy.

Troubleshooting (Cloudflare)

npm run dev fails with an Unauthorized error: You need to enable Cloudflare Containers in the Containers dashboard

Gateway fails to start: Check npx wrangler secret list and npx wrangler tail

Config changes not working: Edit the # Build cache bust: comment in Dockerfile and redeploy

Slow first request: Cold starts take 1-2 minutes. Subsequent requests are faster.

R2 not mounting: Check that all three R2 secrets are set (R2_ACCESS_KEY_ID, R2_SECRET_ACCESS_KEY, CF_ACCOUNT_ID). Note: R2 mounting only works in production, not with wrangler dev.

Access denied on admin routes: Ensure CF_ACCESS_TEAM_DOMAIN and CF_ACCESS_AUD are set, and that your Cloudflare Access application is configured correctly.

Devices not appearing in admin UI: Device list commands take 10-15 seconds due to WebSocket connection overhead. Wait and refresh.

WebSocket issues in local development: wrangler dev has known limitations with WebSocket proxying through the sandbox. HTTP requests work but WebSocket connections may fail. Deploy to Cloudflare for full functionality.

Traditional Setup (Docker & Local)

Architecture Deep Dive

OpenClaw’s modular design enables scalability and customization:

1. The Gateway

The central control plane (default port 18789). It is a WebSocket server that:

  • Manages sessions and presence.
  • Handles routing between channels and agents.
  • Runs cron jobs and webhooks.
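
On a self-hosted install you can quickly confirm the Gateway is listening on its default port; a minimal sketch (any health-check path, if one exists, may differ by version):

# Is the default gateway port open?
nc -z localhost 18789 && echo "gateway port 18789 is open"

# Peek at whatever the HTTP side returns
curl -sS -i http://localhost:18789/ | head -n 5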

2. Agents

Isolated runtimes that act as the “brains.”

  • Multi-Agent Support: You can spawn specialized agents (e.g., one for coding, one for scheduling).
  • Sandboxing: Non-main agents run in Docker sandboxes to prevent accidental system damage.
  • Capabilities: Can proxy subscriptions and edit their own prompts (hot-reload).

3. Channels

OpenClaw supports over 50 messaging platforms.

  • Major: WhatsApp (via Baileys), Telegram (via grammY), Slack (via Bolt), Discord.
  • Others: Signal, iMessage, Google Chat, Microsoft Teams, Matrix, Zalo, BlueBubbles, LINE.

4. Tools & Nodes

  • Browser Node: Controls a Chrome instance (CDP) for web automation.
  • System Node: Executes bash commands (system.run).
  • Media Pipeline: Handles image, audio, and video processing.
  • OS Integration: macOS companion app for screen recording, camera access, and location.

System Requirements (Self-Hosted)

  • Operating System:
    • macOS: 14+ recommended for the companion app.
    • Linux: Any modern distro (Ubuntu/Debian preferred). Ideal for VPS.
    • Windows: WSL2 is required. Native binaries are experimental and unstable.
  • Runtime: Node.js ≥22 (supports pnpm/bun in CI).
  • Hardware:
    • Minimum: Raspberry Pi 4/5 or Mac Mini (M1/M2).
    • Recommended for Local LLMs: PC with NVIDIA GPU (RTX 3090/4090) or Mac Studio.
  • RAM:
    • < 1GB for Gateway/API models.
    • 8GB-32GB+ for running Local LLMs (Ollama/MiniMax).

Setup Instructions

Setup is AI-assisted via an onboarding wizard.

Option A: macOS / Linux (The “Fast Path”)

The easiest way to get started. Installs NVM, Node.js, and OpenClaw.

curl -fsSL https://openclaw.ai/install.sh | bash

Option B: Windows (WSL2)

Critical: Do not use the native Windows commands unless you are a contributor debugging Windows support.

  1. Install WSL2: Open PowerShell as Administrator and run:

    wsl --install

    Restart your computer if prompted.

  2. Set up Ubuntu: Open the “Ubuntu” app, create a username/password.

  3. Install Dependencies & OpenClaw: Inside the Ubuntu terminal:

    # Install Node.js v22
    curl -fsSL https://deb.nodesource.com/setup_22.x | sudo -E bash -
    sudo apt-get install -y nodejs build-essential
    
    # Install OpenClaw
    sudo npm install -g openclaw@latest
    
    # Start Onboarding
    openclaw onboard

Option C: Developer Install (Git)

For those who want to hack on the codebase or use the latest features.

git clone https://github.com/openclaw/openclaw.git
cd openclaw
pnpm install
pnpm ui:build  # Compiles the frontend assets
pnpm build     # Compiles the backend
pnpm openclaw onboard --install-daemon
pnpm gateway:watch  # Runs in dev mode with auto-reload

Docker Setup (Detailed)

Docker is the preferred method for running OpenClaw on a server (VPS) or for keeping your main system clean.

Method 1: Docker Compose (Recommended)

This approach sets up the Gateway and ensures data persistence.

  1. Create a docker-compose.yml file:

    version: '3.8'
    services:
      gateway:
        image: openclaw-local:latest # You will build this locally
        build: .
        restart: unless-stopped
        network_mode: host # Recommended for local discovery, OR port map below
        # ports:
        #   - "18789:18789"
        environment:
          - MOLTBOT_PORT=18789
          - NODE_ENV=production
        volumes:
          - ~/.openclaw:/root/.openclaw # Persist configuration and memory
          - /var/run/docker.sock:/var/run/docker.sock # Optional: if agent needs to spawn sibling containers
  2. Build and Run:

    git clone https://github.com/openclaw/openclaw.git
    cd openclaw
    docker-compose up --build -d
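
To confirm the Gateway came up cleanly, follow its logs with the same docker-compose tooling:

docker-compose logs -f gateway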

Method 2: Manual Docker Run

If you just want to run the container quickly.

# Build the image
docker build -t openclaw .

# Run container (mapping port and config volume)
docker run -d \
  --name openclaw \
  -p 18789:18789 \
  -v ~/.openclaw:/root/.openclaw \
  openclaw

Method 3: Docker Sandbox for Agents

To run sub-agents (not the gateway itself) in isolated containers:

  1. Ensure Docker is installed on the host.
  2. In ~/.openclaw/openclaw.json, set:
    {
    	"drivers": {
    		"sandbox": {
    			"mode": "docker"
    		}
    	}
    }

Post-Setup & Advanced Configuration

1. Connect Channels

After starting the gateway, add channels to talk to your bot.

  • WhatsApp:
    openclaw channel add whatsapp
    (Follow the QR code login flow)
  • Telegram:
    openclaw channel add telegram
    (Paste the Bot Token from BotFather)

2. Remote Access (Securely)

Security Warning: Never expose port 18789 to the open internet.

  • Tailscale (Recommended):
    # Enable Tailscale Serving with authentication
    openclaw gateway --tailscale serve
  • Cloudflare Tunnel: Use cloudflared to tunnel the local port if you prefer standard web access.
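
For the Cloudflare Tunnel route, an ephemeral "quick tunnel" is enough for testing (use a named tunnel for anything permanent); this assumes cloudflared is installed:

cloudflared tunnel --url http://localhost:18789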

3. Local Models (Ollama)

As of Jan 2026, OpenClaw supports auto-discovery for Ollama.

  1. Install and run Ollama (ollama serve).
  2. Enable the provider in ~/.openclaw/openclaw.json:
    {
    	"llm": {
    		"provider": "ollama",
    		"model": "llama3"
    	}
    }
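
If the model is not already downloaded, pull it first and confirm Ollama can serve it (assuming Ollama's default local port, 11434):

ollama pull llama3
curl http://localhost:11434/api/tags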

Benchmarks & Performance (Qualitative)

Since OpenClaw is an agent framework rather than a model, performance depends heavily on the underlying LLM (Claude/GPT) and your hardware.

Feature        OpenClaw (Self-Hosted)      Auto-GPT                Commercial SaaS (Claude/GPT)
Privacy        High (local storage)        Low/Medium              Low (cloud logs)
Capabilities   Extreme (full OS access)    High (internet only)    Low (sandboxed chat)
Setup Cost     Time: high, Money: low      Time: high              Time: none, Money: subscription
Latency        Fast (local execution)      Slow (chain loops)      Fast
Maintenance    Manual updates              Manual                  Automatic

Community Consensus:

  • Pros: “Magical” capability to control physical devices and deep OS integration.
  • Cons: “High friction” to set up; steep learning curve for non-developers; requires valid API keys which can get expensive if loops run wild.

Troubleshooting & Community

  • Doctor Command: Run openclaw doctor to check for missing dependencies or credential issues.
  • Logs: Check ~/.openclaw/logs/gateway.log for errors.
  • Discord: The active community is at discord.gg/openclaw.
  • Updates: Run npm update -g openclaw frequently as the project velocity is very high.
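
Put together, a routine maintenance pass looks something like this (commands and paths taken from the points above):

openclaw doctor                        # check dependencies and credentials
tail -f ~/.openclaw/logs/gateway.log   # watch the gateway for errors
npm update -g openclaw                 # keep up with the fast-moving releases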

Disclaimer: This guide covers OpenClaw v2026.1.24. The project is evolving daily. Always refer to the official GitHub repository for the latest changes, specifically for new beta features.
