Quick start
- Install — If OpenClaw isn’t installed, Ollama prompts to install it via npm
- Security — On the first launch, a security notice explains the risks of tool access
- Model — Pick a model from the selector (local or cloud)
- Onboarding — Ollama configures the provider, installs the gateway daemon, sets your model as the primary, and enables OpenClaw’s bundled Ollama web search
- Gateway — Starts in the background and opens the OpenClaw TUI
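The flow above is driven by a single command. A minimal sketch — the `openclaw` launcher name is assumed from the `clawdbot` alias note below:

```shell
# Start (or first-time install and onboard) OpenClaw via Ollama
ollama launch openclaw
```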
OpenClaw requires a large context window. For local models, a context window of at least 64k tokens is recommended. See Context length for more information.
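If you are running local models, one way to meet the 64k-token recommendation is through the Ollama server's context-length setting. A sketch, assuming a recent Ollama server that honors the `OLLAMA_CONTEXT_LENGTH` environment variable (check Context length for the authoritative method):

```shell
# Serve local models with a 64k-token context window
OLLAMA_CONTEXT_LENGTH=65536 ollama serve
```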
Previously known as Clawdbot. `ollama launch clawdbot` still works as an alias.

Web search and fetch
OpenClaw ships with a bundled Ollama `web_search` provider that lets local or cloud-backed Ollama setups search the web through the configured Ollama host.
Ollama web search for local models requires `ollama signin`.

Configure without launching

To change the model without starting the gateway and TUI:

Recommended models
Cloud models:
- kimi-k2.5:cloud — Multimodal reasoning with subagents
- qwen3.5:cloud — Reasoning, coding, and agentic tool use with vision
- glm-5.1:cloud — Reasoning and code generation
- minimax-m2.7:cloud — Fast, efficient coding and real-world productivity
Local models:
- gemma4 — Reasoning and code generation locally (~16 GB VRAM)
- qwen3.5 — Reasoning, coding, and visual understanding locally (~11 GB VRAM)
Non-interactive (headless) mode
Run OpenClaw without interaction for use in Docker, CI/CD, or scripts. The `--yes` flag auto-pulls the model, skips the selectors, and requires `--model` to be specified.
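A sketch of a headless invocation combining the flags described above — the exact command shape is an assumption, and the model name is only an example:

```shell
# Non-interactive run: auto-pull the model, skip all selectors
ollama launch openclaw --yes --model qwen3.5
```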


