Q: How do I deploy OpenClaw on Kubernetes using the Helm chart?
Add the Helm repository: helm repo add openclaw https://charts.openclaw.dev && helm repo update. Install with defaults: helm install openclaw openclaw/openclaw -n openclaw --create-namespace. Customise values: helm install openclaw openclaw/openclaw -n openclaw --create-namespace -f my-values.yaml. Verify: kubectl get pods -n openclaw. The API server pod should reach Running state within 60 seconds. Access the API: kubectl port-forward svc/openclaw-api 7400:7400 -n openclaw.
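The `-f my-values.yaml` override file referenced above might look like the sketch below. The keys are assembled from other answers in this FAQ (`image.tag`, `metrics.enabled`, `ingress.enabled`), not verified against the chart itself — run `helm show values openclaw/openclaw` to see the authoritative schema before relying on them.

```yaml
# my-values.yaml -- minimal override sketch; key names taken from other
# answers in this FAQ, confirm with `helm show values openclaw/openclaw`
image:
  tag: "3.2.0"

metrics:
  enabled: true

ingress:
  enabled: false   # see the Ingress question before enabling
```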
Q: How do I configure horizontal autoscaling for OpenClaw simulation pods?
In your Helm values file set: simulator: autoscaling: enabled: true minReplicas: 1 maxReplicas: 20 targetCPUUtilizationPercentage: 70. Apply: helm upgrade openclaw openclaw/openclaw -n openclaw -f values.yaml. The HPA controller will scale simulation pods between 1 and 20 based on CPU load. For GPU-accelerated simulation, configure KEDA with a custom metric from OpenClaw's Prometheus endpoint instead of CPU-based scaling.
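Laid out as a proper values file, the flattened settings above are:

```yaml
# values.yaml -- HPA settings for the simulation pods, nesting as
# given in the answer above
simulator:
  autoscaling:
    enabled: true
    minReplicas: 1
    maxReplicas: 20
    targetCPUUtilizationPercentage: 70
```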
Q: How do I manage OpenClaw credentials securely in Kubernetes?
Use Kubernetes Secrets: kubectl create secret generic openclaw-ai-keys --from-literal=OPENAI_API_KEY=your_key -n openclaw. Reference the Secret in values.yaml: envFrom: - secretRef: name: openclaw-ai-keys. For GitOps-safe secret management, use External Secrets Operator with AWS Secrets Manager, Azure Key Vault, or GCP Secret Manager as the backend. Never commit API keys to values.yaml or anywhere in Git history.
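As a values-file fragment, the Secret reference above looks like this. Whether `envFrom` sits at the top level of values.yaml or under a per-component block is chart-specific — check the chart's defaults.

```yaml
# values.yaml -- expose every key in the Secret as an environment
# variable in the OpenClaw pods
envFrom:
  - secretRef:
      name: openclaw-ai-keys
```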
Q: How do I monitor OpenClaw metrics with Prometheus and Grafana on Kubernetes?
Enable metrics in values.yaml: metrics: enabled: true serviceMonitor: enabled: true. Install kube-prometheus-stack: helm install monitoring prometheus-community/kube-prometheus-stack -n monitoring --create-namespace. The OpenClaw ServiceMonitor will be picked up automatically. Import the OpenClaw Grafana dashboard from openclaw.dev/grafana-dashboard.json. Key metrics: openclaw_command_duration_seconds_bucket (latency histogram), openclaw_joint_error_total (error counter), openclaw_simulator_fps (simulation performance).
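In values-file form, the metrics settings above are:

```yaml
# values.yaml -- expose the metrics endpoint and create a
# ServiceMonitor for kube-prometheus-stack to discover
metrics:
  enabled: true
  serviceMonitor:
    enabled: true
```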
Q: How do I perform a zero-downtime upgrade of OpenClaw on Kubernetes?
Update the image tag in values.yaml: image: tag: "3.2.0". Run helm upgrade openclaw openclaw/openclaw -n openclaw -f values.yaml. The rolling update strategy terminates old pods only after new pods pass readiness probes, and the preStop hook, combined with terminationGracePeriodSeconds: 30, gives in-flight commands up to 30 seconds to complete before a pod exits. Monitor the rollout: kubectl rollout status deployment/openclaw-api -n openclaw. Roll back if needed: helm rollback openclaw -n openclaw.
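The version bump as a values-file fragment is shown below. The grace period key is named in the answer above, but whether the chart exposes it at the top level of values.yaml is an assumption — verify against the chart's defaults.

```yaml
# values.yaml -- image bump for the rolling upgrade
image:
  tag: "3.2.0"

# Shutdown budget for in-flight commands (placement in the chart's
# values schema is an assumption; check `helm show values`)
terminationGracePeriodSeconds: 30
```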
Q: How do I manage a fleet of 50 robots with OpenClaw on Kubernetes?
Deploy one OpenClaw API server per robot namespace (or use namespace-per-team with shared simulation infrastructure). Use the openclaw-operator (shipped with the Helm chart) to manage robot CRDs: apiVersion: openclaw.dev/v1 kind: Robot metadata: name: arm-001 spec: model: ur5e ip: 192.168.10.101. The operator handles connection pooling, health checks, and automatic reconnection. A single 3-node Kubernetes cluster handles 50 robot connections, with OpenClaw API pods consuming roughly 200 MB RAM per robot namespace.
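The flattened Robot manifest above, written out as a YAML file you can `kubectl apply -f`:

```yaml
# robot.yaml -- Robot custom resource, fields as given in the answer
apiVersion: openclaw.dev/v1
kind: Robot
metadata:
  name: arm-001
  namespace: openclaw   # assumption: use your per-robot namespace here
spec:
  model: ur5e
  ip: 192.168.10.101
```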
Q: What Kubernetes resource requests and limits should I set for OpenClaw?
Recommended values.yaml resource settings: API server: requests: cpu: 250m, memory: 256Mi; limits: cpu: 1000m, memory: 512Mi. Simulator pod (CPU): requests: cpu: 2000m, memory: 2Gi; limits: cpu: 4000m, memory: 4Gi. Simulator pod (GPU): requests: nvidia.com/gpu: 1; limits: nvidia.com/gpu: 1 (also set CPU and memory as above; Kubernetes requires GPU requests and limits to be equal). Hardware bridge (real-time sensitive): requests: cpu: 500m, memory: 128Mi, with CPU pinning via the kubelet's static CPU Manager policy (--cpu-manager-policy=static); note that exclusive-core pinning only applies to Guaranteed pods with integer CPU requests, so use cpu: 1 if you need pinning.
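The recommendations above, sketched as a values file. Only `simulator` appears as a key elsewhere in this FAQ; the `api` and `hardwareBridge` top-level names are assumptions, so map them to whatever component blocks the chart actually defines.

```yaml
# values.yaml -- resource sketch; top-level component names (api,
# hardwareBridge) are assumptions, check the chart's defaults
api:
  resources:
    requests: {cpu: 250m, memory: 256Mi}
    limits:   {cpu: 1000m, memory: 512Mi}

simulator:
  resources:
    requests: {cpu: 2000m, memory: 2Gi}
    limits:   {cpu: 4000m, memory: 4Gi}
    # GPU variant: add nvidia.com/gpu: 1 to both requests and limits
    # (Kubernetes requires extended-resource requests == limits)

hardwareBridge:
  resources:
    requests: {cpu: 500m, memory: 128Mi}
```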
Q: How do I expose the OpenClaw API outside the Kubernetes cluster?
For internal team access, use kubectl port-forward or configure an Ingress in values.yaml: ingress: enabled: true className: nginx hosts: - host: openclaw.internal.example.com paths: - path: / pathType: Prefix. Add TLS: tls: - secretName: openclaw-tls hosts: - openclaw.internal.example.com. For robot hardware outside the cluster, use a NodePort or LoadBalancer Service specifically for the hardware bridge port (7401), so physical robot traffic is not routed through the Ingress.
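In values-file form, the Ingress settings above look like the sketch below. Whether `tls` nests under `ingress` (as shown) or elsewhere is chart-specific — confirm with `helm show values openclaw/openclaw`.

```yaml
# values.yaml -- Ingress with TLS, fields as given in the answer
ingress:
  enabled: true
  className: nginx
  hosts:
    - host: openclaw.internal.example.com
      paths:
        - path: /
          pathType: Prefix
  tls:
    - secretName: openclaw-tls
      hosts:
        - openclaw.internal.example.com
```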