OpenClaw runs on Linux (arm64, amd64, armv7) and macOS. Minimum requirement: Node.js 22 or newer.
On Linux (Debian/Ubuntu):
# Update system
sudo apt update && sudo apt upgrade -y
# Install Node.js 22+
curl -fsSL https://deb.nodesource.com/setup_22.x | sudo -E bash -
sudo apt install -y nodejs
# Install OpenClaw
npm install -g openclaw
# Initialize agent
openclaw init --non-interactive
openclaw pairing approve telegram CODE_HERE
# Start
openclaw agent run
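Before starting the agent, it is worth confirming the installed Node.js actually meets the 22+ requirement. A minimal sketch of that check follows; the hard-coded version string stands in for the real output of `node --version`:

```shell
# Parse the major version out of `node --version` output (format "vMAJOR.MINOR.PATCH").
ver="v22.11.0"   # stand-in for: ver="$(node --version)"
major="${ver#v}"
major="${major%%.*}"
if [ "$major" -ge 22 ]; then
  echo "Node.js $ver meets the 22+ requirement"
else
  echo "Node.js $ver is too old; OpenClaw needs 22+"
fi
```

Run the same parameter expansions against the live `node --version` output to gate the rest of the setup.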
On macOS:
# Install Homebrew if needed
/bin/bash -c "$(curl -fsSL https://raw.githubusercontent.com/Homebrew/install/HEAD/install.sh)"
# Install Node.js 22 (node@22 is keg-only, so add it to PATH;
# on Intel Macs use /usr/local/opt/node@22/bin instead)
brew install node@22
echo 'export PATH="/opt/homebrew/opt/node@22/bin:$PATH"' >> ~/.zshrc
# Install OpenClaw
npm install -g openclaw
# Initialize agent
openclaw init --non-interactive
openclaw pairing approve telegram CODE_HERE
# Start
openclaw agent run
Edit ~/.openclaw/openclaw.json:
{
  "agent": {
    "name": "MyAgent",
    "model": "anthropic/claude-sonnet-4-5"
  },
  "channels": {
    "telegram": { "enabled": true }
  },
  "plugins": {
    "entries": {
      "telegram": { "enabled": true }
    }
  }
}
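A hand-edited config is easy to break with a stray comma, so it helps to run the file through a JSON parser before restarting the agent. This sketch validates a sample config with Python's stdlib `json.tool`; in practice, point it at `~/.openclaw/openclaw.json` directly:

```shell
# Write a sample config to a temp file and validate it with a strict JSON parser.
cfg="$(mktemp)"
cat > "$cfg" <<'EOF'
{
  "agent": { "name": "MyAgent", "model": "anthropic/claude-sonnet-4-5" },
  "channels": { "telegram": { "enabled": true } },
  "plugins": { "entries": { "telegram": { "enabled": true } } }
}
EOF
if python3 -m json.tool "$cfg" > /dev/null; then
  echo "config is valid JSON"
fi
rm -f "$cfg"
```

`json.tool` exits non-zero on any syntax error, so the same one-liner works as a pre-restart guard in scripts.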
OpenClaw can use both cloud APIs (Anthropic Claude, OpenAI) and local models (Ollama, vLLM).
For cost savings of up to 85%, run inference locally with Ollama:
# Install Ollama
curl -fsSL https://ollama.ai/install.sh | sh
# Download a model
ollama pull qwen2.5:7b
# OpenClaw will auto-detect on localhost:11434
See the Local LLM Inference Guide for model recommendations across 20+ hardware tiers (Raspberry Pi to H100).
Edit ~/.openclaw/openclaw.json to switch to the local model; the cloud fallback is used if Ollama is down:
{
  "agent": {
    "name": "MyAgent",
    "model": "ollama/qwen2.5:7b"
  },
  "providers": {
    "ollama": {
      "endpoint": "http://localhost:11434",
      "fallback": "anthropic/claude-sonnet-4-5"
    }
  }
}
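The fallback behavior can be sketched in shell: probe the Ollama endpoint and fall back to the cloud model when it does not answer. The endpoint and model names mirror the config above; probing `/api/tags` as a health check is an assumption about Ollama's HTTP API, not OpenClaw's actual implementation:

```shell
# Select the local model when Ollama answers on its endpoint, otherwise the cloud fallback.
endpoint="http://localhost:11434"
if curl -fsS --max-time 2 "$endpoint/api/tags" > /dev/null 2>&1; then
  model="ollama/qwen2.5:7b"
else
  model="anthropic/claude-sonnet-4-5"
fi
echo "selected model: $model"
```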
If your agent needs access to serial devices (USB serial adapters and similar hardware), add your user to the dialout group:
sudo usermod -a -G dialout $USER
# Log out and back in for the group change to take effect
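After logging back in, you can confirm the membership is active by listing the current user's groups:

```shell
# Report whether dialout appears in the current user's group list.
if id -nG | grep -qw dialout; then
  echo "dialout: active"
else
  echo "dialout: not active yet (log out and back in after usermod)"
fi
```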