# Ollama Integration

Run local AI models with Ollama and OpenClaw. No API key required.
## Overview

Ollama lets you run AI models locally on your own hardware. Requests never leave your machine, so no API key is needed and your data stays private.
## Available Models
- Llama 3.1 — Meta's open-source model
- Mistral — Fast and efficient
- Qwen 2.5 — Alibaba's multilingual model
- DeepSeek — Strong coding model
- Phi-3 — Microsoft's compact model
## Install Ollama
```bash
# macOS / Linux
curl -fsSL https://ollama.com/install.sh | sh

# Pull a model
ollama pull llama3.1
```

## Configuration
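Before wiring the model into OpenClaw, it's worth confirming the pull succeeded and the Ollama server is reachable. A quick check, assuming Ollama's default port of 11434:

```bash
# List locally installed models; llama3.1 should appear in the output
ollama list

# Query the server's REST API for the same list as JSON
curl -s http://localhost:11434/api/tags
```

If `ollama list` fails, the server may not be running; `ollama serve` starts it in the foreground.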
```json
{
  "agents": {
    "default": {
      "provider": "ollama",
      "model": "llama3.1"
    }
  }
}
```

## Environment Variables
```bash
export OLLAMA_BASE_URL="http://localhost:11434"
```

## Next Steps
- Integration Overview — All supported providers
- AI Agent — Agent configuration
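Outside of OpenClaw, you can also exercise the model directly through Ollama's HTTP API. A minimal sketch using the `/api/generate` endpoint (part of Ollama's standard REST API), assuming the default local server:

```bash
# Ask the local model a question directly; no API key involved.
# "stream": false returns one JSON object instead of a token stream.
curl -s http://localhost:11434/api/generate -d '{
  "model": "llama3.1",
  "prompt": "Say hello in one word.",
  "stream": false
}'
```

The JSON reply's `response` field holds the model's answer, which is a handy way to isolate problems: if this call works but OpenClaw does not, the issue is in the OpenClaw configuration rather than in Ollama.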