iMerica/kubeclaw

██╗  ██╗██╗   ██╗██████╗ ███████╗ ██████╗██╗      █████╗ ██╗    ██╗
██║ ██╔╝██║   ██║██╔══██╗██╔════╝██╔════╝██║     ██╔══██╗██║    ██║
█████╔╝ ██║   ██║██████╔╝█████╗  ██║     ██║     ███████║██║ █╗ ██║
██╔═██╗ ██║   ██║██╔══██╗██╔══╝  ██║     ██║     ██╔══██║██║███╗██║
██║  ██╗╚██████╔╝██████╔╝███████╗╚██████╗███████╗██║  ██║╚███╔███╔╝
╚═╝  ╚═╝ ╚═════╝ ╚═════╝ ╚══════╝ ╚═════╝╚══════╝╚═╝  ╚═╝ ╚══╝╚══╝

Production-grade OpenClaw on Kubernetes.

Badges: CI · Chart Version · Kubernetes 1.25+ · Helm 3.12+ · License · Alpha · OCI Registry · Trivy · kubeconform · kube-linter


Prerequisites

  • Any Kubernetes cluster running 1.25+
  • Helm 3.12+

Why KubeClaw

KubeClaw wraps OpenClaw with the operational guardrails that production deployments need: secure defaults, pinned images, predictable upgrades, egress filtering, and batteries-included observability. It treats the cluster as the control plane, so behavior is visible and controllable by default: Wide Events unify observability, digest pinning prevents image drift, and Blocky-backed DNS egress controls enforce a default-deny outbound posture with explicit allow/deny lists and query logging. The result is fewer trust gaps: what ran, what changed, what it called, and what it emitted are all auditable.

Architecture

graph TB
    subgraph clients [" "]
        direction LR
        app([fa:fa-desktop macOS App])
        cli([fa:fa-terminal CLI])
        web([fa:fa-globe Web UI])
        tsdevice([fa:fa-network-wired Tailnet Device])
    end

    subgraph cluster ["Kubernetes Cluster"]
        ingress[fa:fa-shield-halved Ingress / K8s Gateway API]

        subgraph gwpod ["OpenClaw Gateway StatefulSet · replicas: 1"]
            gw[fa:fa-server OpenClaw Gateway :18789]
            ts[fa:fa-lock Tailscale SSH]
        end

        subgraph chromepod ["Chromium Deployment"]
            chrome[fa:fa-window-maximize Chromium :9222]
        end

        subgraph clickstack ["ClickStack (Wide Events)"]
            otelgw[fa:fa-tower-broadcast OTel Collector]
            clickhouse[(fa:fa-database ClickHouse)]
            hyperdx[fa:fa-chart-line HyperDX UI]
            otelgw -->|write| clickhouse
            hyperdx -->|query| clickhouse
        end

        otelnode[fa:fa-microchip OTel Node · DaemonSet]
        otelcluster[fa:fa-cubes OTel Cluster · Deployment]

        svc[[fa:fa-diagram-project Gateway Service :18789]]
        chromesvc[[fa:fa-diagram-project Chromium Service :9222]]
        litellm[fa:fa-route LiteLLM Proxy :4000]
        egressfilter[fa:fa-filter Egress Filter :53]
        pvc[(fa:fa-database PVC)]
        obsidianpvc[(fa:fa-book Obsidian Vault)]
    end

    subgraph external [" "]
        direction LR
        llm([fa:fa-brain LLM APIs])
        msg([fa:fa-comments Messaging Providers])
    end

    app & cli & web -->|WS / HTTP| ingress
    tsdevice -->|SSH| ts
    ingress --> svc --> gw
    gw -->|CDP| chromesvc --> chrome
    gw -->|HTTP| litellm
    gw -.->|DNS| egressfilter
    gw ---|state| pvc
    gw ---|tasks| obsidianpvc
    gw -->|outbound| msg
    litellm -->|HTTPS| llm
    gw -->|OTLP| otelgw
    otelnode -->|OTLP| otelgw
    otelcluster -->|OTLP| otelgw

Install

One-line installer (recommended)

curl -fsSL https://kubeclaw.ai/install.sh | bash

Via OCI (manual)

helm install kubeclaw oci://ghcr.io/imerica/kubeclaw \
  --namespace kubeclaw \
  --create-namespace \
  --set secret.data.OPENCLAW_GATEWAY_TOKEN=change-me
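
Rather than installing with a literal `change-me`, generate a strong token first. A minimal sketch using `openssl` (any CSPRNG works):

```shell
# Generate a 64-hex-character gateway token
TOKEN="$(openssl rand -hex 32)"
echo "${#TOKEN}"   # 64 hex characters
```

Then pass it to the install command with `--set secret.data.OPENCLAW_GATEWAY_TOKEN="$TOKEN"` so the literal value never lands in your shell history as a typed string.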

Configuration

All values are documented inline in charts/kubeclaw/values.yaml. The minimum required values are:

| Key | Notes |
| --- | --- |
| `secret.data.OPENCLAW_GATEWAY_TOKEN` | Required. Strong random string; treat it as a password |
| `tailscale.ssh.authKey` | Required when `tailscale.ssh.enabled` is set (unless `authKeySecretName` is provided) |
| `litellm.masterkey` | Required when `litellm.enabled` (the default). Must start with `sk-` |
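
Put together, a minimal custom-values file covering the required keys above might look like the following sketch (key paths taken from the table; check them against your chart version's `values.yaml`):

```yaml
# my-values.yaml -- minimal required configuration (sketch)
secret:
  data:
    OPENCLAW_GATEWAY_TOKEN: "change-me"   # strong random string; treat as a password
litellm:
  masterkey: "sk-change-me"               # must start with sk- when litellm.enabled (default)
```

Install it with `helm install kubeclaw oci://ghcr.io/imerica/kubeclaw --namespace kubeclaw --create-namespace -f my-values.yaml`.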

Full configuration reference, advanced examples, and per-feature setup: Install Guide · kubeclaw.ai/docs

Image pinning policy: each chart release is validated against a candidate image, then the chart defaults are updated to the exact image.tag + image.digest before publishing.
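
If you override the default image, you can pin the same way by setting both fields, so the digest, not the tag, is authoritative. A sketch (the version and digest below are hypothetical placeholders):

```yaml
image:
  tag: "1.2.3"                 # human-readable marker only
  digest: "sha256:0123abcd"    # what the kubelet actually pulls; prevents tag drift
```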

What You Get

| Feature | Description |
| --- | --- |
| StatefulSet | Durable PVC-backed storage at `/home/node/.openclaw` |
| GitOps-friendly config | Declare desired `openclaw.json`; chart handles merge or overwrite via initContainer |
| WebSocket-ready Ingress | Configurable TLS |
| K8s Gateway API routing | Single-hostname path-based routing for all services via `gateway.networking.k8s.io/v1` HTTPRoutes; optional bundled Envoy Gateway controller |
| Split workspace volume | Separate PVC for workspace via `persistence.splitVolumes` |
| Chromium Deployment | Browser automation via standalone Deployment + ClusterIP Service on port 9222 (cluster-internal) |
| LiteLLM proxy subchart | Per-agent virtual keys, budget caps, model fallback routing, and semantic caching |
| Wide Events observability | Logs, metrics, traces, and Kubernetes events unified in ClickHouse via the Wide Events pattern, replacing separate logging, metrics, and tracing backends. Ships with HyperDX for search and dashboards, and OpenTelemetry collectors for zero-config cluster-wide collection |
| Egress DNS filter | NextDNS-style DNS filtering via Blocky, including threat blocklists (HaGeZi, StevenBlack), country TLD blocking, and query logging |
| Container hardening | Non-root UID, read-only root filesystem, no privilege escalation, all capabilities dropped, and RuntimeDefault seccomp profile |
| NetworkPolicy | Scaffolding for locking down traffic |
| S3 backup | Scheduled and pre-delete backups of Gateway state to S3-compatible storage via rclone |
| Diagnostics CronJob | Periodic `openclaw doctor` runs |
| Skills system | Declarative skill install at deploy time; supports playbooks, clawhub, and npm registries, including a default GitHub skill for PR/issue workflows |
| Tools system | Reusable tools-init installer for in-pod CLIs; ships with `gh` by default and is extensible for additional tools |
| Obsidian vault | PVC-backed markdown vault mounted at `/vaults/obsidian`; wired to the Obsidian skill for task management |
| Tailscale integration | Expose the Gateway onto your tailnet without public ingress (`tailscale.expose`), and/or SSH into the pod from any enrolled device (`tailscale.ssh`) |
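
The NetworkPolicy scaffolding and the DNS egress filter compose naturally: a default-deny egress policy that only allows DNS traffic to the filter forces all outbound name resolution through Blocky. A minimal sketch, using hypothetical label selectors rather than the chart's actual ones:

```yaml
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: gateway-default-deny-egress
spec:
  podSelector:
    matchLabels:
      app: openclaw-gateway        # hypothetical label for the Gateway pod
  policyTypes: [Egress]
  egress:
    - to:
        - podSelector:
            matchLabels:
              app: egress-filter   # hypothetical label for the Blocky pod
      ports:
        - protocol: UDP
          port: 53
        - protocol: TCP
          port: 53
```

See the chart's NetworkPolicy templates for the selectors it actually renders.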

Docs

- **Install Guide**: step-by-step setup
- **Verify**: lint, render, and schema checks
- **Troubleshooting**: common issues and fixes
- **Restore Runbook**: backup and recovery procedures
- **Full Documentation**: complete reference at kubeclaw.ai

Community

- **Vision**: project direction and priorities
- **Contributing**: how to propose and ship changes
- **Security Policy**: private vulnerability reporting process
- **Code of Conduct**: community expectations
- **Support**: where to ask for help

KubeClaw Enterprise

Need multi-tenancy, enterprise egress controls, SSO, policy-as-code, CSI-backed secrets, backup hooks, or signed OCI distribution? See kubeclaw.ai.

License

Apache 2.0
