Here is a detailed, hand-holding OpenClaw tutorial to help you quickly complete the installation and WebUI setup on Linux.
First, verify your environment:

```bash
# Check the Python version
python3 --version

# Check CUDA (if using a GPU)
nvidia-smi

# Check pip
pip3 --version
```
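If you prefer to gate the rest of the install programmatically, the same checks can be sketched in Python. The 3.8 minimum here is an assumption, not a documented OpenClaw requirement — adjust it to whatever the project actually needs:

```python
import shutil
import sys


def python_ok(minimum=(3, 8)):
    """Return True if the running interpreter meets the minimum version."""
    return sys.version_info[:2] >= minimum


def cuda_tools_present():
    """Return True if nvidia-smi is on PATH (a rough proxy for a GPU setup)."""
    return shutil.which("nvidia-smi") is not None


if __name__ == "__main__":
    print("Python OK:", python_ok())
    print("CUDA tools:", cuda_tools_present())
```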
Quick install via the repository's install script:

```bash
# Clone the repository
git clone https://github.com/OpenClaw/OpenClaw.git
cd OpenClaw

# Run the install script
chmod +x install.sh
./install.sh
```
Or install manually, step by step:

```bash
# 1. Create and activate a virtual environment
python3 -m venv openclaw_env
source openclaw_env/bin/activate

# 2. Install PyTorch (pick the build matching your CUDA version)
# CUDA 11.7
pip3 install torch torchvision torchaudio --index-url https://download.pytorch.org/whl/cu117
# or the CPU-only build
pip3 install torch torchvision torchaudio

# 3. Install OpenClaw
git clone https://github.com/OpenClaw/OpenClaw.git
cd OpenClaw
pip install -r requirements.txt

# 4. Install the WebUI dependencies
pip install gradio flask
```
Next, download a model:

```bash
# Enter the model directory
cd models

# Download the base model (example)
wget https://huggingface.co/openclaw/models/resolve/main/openclaw-base.bin

# Or use the official download script
python download_models.py --model base --save-dir ./models
```
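Large downloads occasionally truncate, so it is worth verifying the file before loading it. A minimal sketch using SHA-256 — the `EXPECTED` value below is a placeholder, not a real checksum; substitute the one published for the release if the project provides it:

```python
import hashlib
from pathlib import Path


def sha256_of(path, chunk_size=1 << 20):
    """Hash the file in 1 MiB chunks so large models don't fill RAM."""
    digest = hashlib.sha256()
    with open(path, "rb") as fh:
        for chunk in iter(lambda: fh.read(chunk_size), b""):
            digest.update(chunk)
    return digest.hexdigest()


if __name__ == "__main__":
    model = Path("./models/openclaw-base.bin")
    # EXPECTED is hypothetical; replace it with the published checksum.
    EXPECTED = "0000000000000000000000000000000000000000000000000000000000000000"
    if model.exists():
        print("checksum ok:", sha256_of(model) == EXPECTED)
```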
Create `config.yaml`:

```yaml
model:
  name: "openclaw-base"
  path: "./models/openclaw-base.bin"
  device: "cuda"  # or "cpu"

webui:
  host: "0.0.0.0"
  port: 7860
  share: false
```
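A sketch of how the WebUI side might consume these settings. The `resolve_device` fallback is an assumption about sensible behavior, not OpenClaw's documented API, and the dict stands in for the output of a YAML parser such as PyYAML:

```python
# Stand-in for yaml.safe_load(open("config.yaml")) — same shape as the file above.
config = {
    "model": {
        "name": "openclaw-base",
        "path": "./models/openclaw-base.bin",
        "device": "cuda",
    },
    "webui": {"host": "0.0.0.0", "port": 7860, "share": False},
}


def resolve_device(requested, cuda_available):
    """Fall back to CPU when CUDA was requested but isn't usable."""
    if requested == "cuda" and not cuda_available:
        return "cpu"
    return requested


if __name__ == "__main__":
    device = resolve_device(config["model"]["device"], cuda_available=False)
    print(device)  # cpu
```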
Create `start_webui.py`:

```python
import gradio as gr

from openclaw import OpenClaw

# Initialize the model
claw = OpenClaw(model_path="./models/openclaw-base.bin")


def respond(message, history):
    response = claw.generate(message)
    return response


# Build the WebUI
demo = gr.ChatInterface(
    fn=respond,
    title="OpenClaw Chatbot",
    description="A conversational AI built on OpenClaw",
)

if __name__ == "__main__":
    demo.launch(
        server_name="0.0.0.0",
        server_port=7860,
        share=False,
    )
```
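Before pointing Gradio at a multi-gigabyte model, you can sanity-check the shape of the `respond` callback with a stub in place of the real `OpenClaw` class. `StubClaw` and `make_responder` are purely hypothetical test helpers, not part of OpenClaw:

```python
class StubClaw:
    """Hypothetical stand-in for OpenClaw: just echoes the prompt back."""

    def generate(self, message):
        return f"echo: {message}"


def make_responder(model):
    # Gradio's ChatInterface calls fn(message, history); history is unused here.
    def respond(message, history):
        return model.generate(message)

    return respond


if __name__ == "__main__":
    respond = make_responder(StubClaw())
    print(respond("hello", []))  # echo: hello
```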
Launch the WebUI:

```bash
# Option 1: use the script
python start_webui.py

# Option 2: use the built-in server
python -m openclaw.webui --config config.yaml
```
Then open a browser and visit:

- http://localhost:7860 (local access)
- http://[your IP address]:7860 (access from other machines)

To run the WebUI as a background service, create `/etc/systemd/system/openclaw.service`:
```ini
[Unit]
Description=OpenClaw WebUI Service
After=network.target

[Service]
Type=simple
User=your_username
WorkingDirectory=/path/to/OpenClaw
Environment="PATH=/path/to/openclaw_env/bin"
ExecStart=/path/to/openclaw_env/bin/python start_webui.py
Restart=always

[Install]
WantedBy=multi-user.target
```
Enable and start the service:

```bash
sudo systemctl daemon-reload
sudo systemctl enable openclaw
sudo systemctl start openclaw
```
To serve the WebUI through an Nginx reverse proxy, add a server block like this (the `Upgrade`/`Connection` headers keep Gradio's websocket connection working):

```nginx
server {
    listen 80;
    server_name your_domain.com;

    location / {
        proxy_pass http://127.0.0.1:7860;
        proxy_http_version 1.1;
        proxy_set_header Upgrade $http_upgrade;
        proxy_set_header Connection "upgrade";
        proxy_set_header Host $host;
    }
}
```
Common fixes and tweaks:

```bash
# Use a quantized model (lower memory footprint)
wget https://huggingface.co/openclaw/models/resolve/main/openclaw-base-4bit.bin

# Tune the CUDA allocator (helps with fragmentation and OOM errors)
export PYTORCH_CUDA_ALLOC_CONF=max_split_size_mb:128

# Change the port
python start_webui.py --port 8080

# Reinstall dependencies
pip install --upgrade -r requirements.txt

# Pin a specific version
pip install gradio==3.50.2
```
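Before reinstalling anything, it helps to see which versions are actually present in the active environment. A small stdlib-only sketch:

```python
from importlib.metadata import PackageNotFoundError, version


def installed_version(package):
    """Return the installed version string, or None if the package is absent."""
    try:
        return version(package)
    except PackageNotFoundError:
        return None


if __name__ == "__main__":
    for pkg in ("gradio", "torch"):
        print(pkg, installed_version(pkg))
```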
To automate everything, create `quick_install.sh`:

```bash
#!/bin/bash
set -e

echo "Installing OpenClaw..."

# Update the system
sudo apt update
sudo apt install -y python3-pip python3-venv git wget

# Create and activate a virtual environment
python3 -m venv openclaw_env
source openclaw_env/bin/activate

# Install PyTorch (CUDA 11.7 build; swap in the CPU build if needed)
pip install torch torchvision torchaudio --index-url https://download.pytorch.org/whl/cu117

# Clone the repository
git clone https://github.com/OpenClaw/OpenClaw.git
cd OpenClaw

# Install dependencies
pip install -r requirements.txt
pip install gradio

# Download the model
mkdir -p models
cd models
wget https://huggingface.co/openclaw/models/resolve/main/openclaw-base.bin
cd ..

echo "Install complete!"
echo "To start the WebUI:"
echo "cd /path/to/OpenClaw"
echo "source ../openclaw_env/bin/activate"  # the venv sits one level above the repo
echo "python start_webui.py"
```
Finally, run a quick smoke test:

```bash
python -c "
from openclaw import OpenClaw
claw = OpenClaw(model_path='./models/openclaw-base.bin')
response = claw.generate('Hello')
print('Test passed! Response:', response)
"
```
That's it — OpenClaw is now fully installed and the WebUI configured on Linux! If you run into any problems, check the project's GitHub Issues or the official documentation.