Meta Description: Three proven methods to deploy Hermes Agent on the cloud: one-click template, manual Linux setup, and Docker deployment. Learn which approach fits your needs, with full step-by-step instructions for each method.
Target Keywords: deploy hermes agent cloud methods, hermes agent docker cloud, hermes agent manual deployment, hermes agent cloud options, run hermes on cloud server, hermes agent VPS deployment, hermes agent linux cloud setup
Schema Type: HowTo (multi-path variant)
"Deploy Hermes on the cloud" can mean three very different things depending on your situation:

- Launching a pre-built one-click template on a managed cloud instance
- Installing and configuring every component yourself on a Linux VPS
- Running the agent as a containerized workload with Docker
Each approach has genuine trade-offs. This guide covers all three — in honest detail — so you can pick the right method rather than the fastest one.
| | Method 1: One-Click Template | Method 2: Manual Linux Setup | Method 3: Docker |
|---|---|---|---|
| Setup time | 5–10 minutes | 45–90 minutes | 20–40 minutes |
| Technical skill required | Minimal | Intermediate Linux | Docker familiarity |
| Customization | Limited to config files | Full OS-level control | Container-level control |
| Maintenance overhead | Low (managed) | Higher (self-managed) | Medium |
| Best for | Most individual developers | Custom infrastructure needs | Teams already using Docker |
| Cloud platform | Tencent Cloud Lighthouse | Any Linux VPS | Any Docker-capable host |
If you're not sure which to choose: start with Method 1. You can always migrate to a more customized setup later.
This is the path Tencent Cloud officially supports, and Lighthouse is currently the only cloud platform with a native Hermes Agent deployment template.
The Lighthouse Hermes Agent template pre-configures:

- Redis (memory backend)
- The hermes-agent systemd service
- Networking and port access
1. Launch the instance
Go to tencentcloud.com/act/pro/hermesagent and complete the Lighthouse purchase with the Hermes Agent template selected.
Minimum spec: 2-core CPU, 4GB RAM, 60GB SSD. Instance is live in ~90 seconds.
2. Log in via browser terminal
In the Lighthouse console, click "Login" on your running instance. No SSH client needed.
3. Configure your API keys
cd ~/hermes-agent
cp .env.example .env
nano .env
Set at minimum:
LLM_PROVIDER=openai
LLM_API_KEY=your-api-key
LLM_MODEL=gpt-4o
API_AUTH_TOKEN=your-random-token
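API_AUTH_TOKEN should be an unguessable random string. One portable way to generate one — a sketch using /dev/urandom so it needs no extra packages; any random string of 32+ characters works just as well:

```shell
# Generate a random 32-hex-character token for API_AUTH_TOKEN.
# od formats the raw bytes as hex; tr strips the spacing.
token=$(head -c 16 /dev/urandom | od -An -tx1 | tr -d ' \n')
echo "API_AUTH_TOKEN=$token"
```

Paste the printed value into the `API_AUTH_TOKEN` line of your `.env`.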
4. Start the service
sudo systemctl start hermes-agent
sudo systemctl enable hermes-agent
sudo systemctl status hermes-agent
5. Verify
curl http://localhost:8080/health
Done. Your Hermes Agent is running 24/7 in the cloud.
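The health endpoint can take a few seconds to come up after `systemctl start`, so a one-shot `curl` may fail spuriously. A small retry helper you could use instead — a sketch; the `/health` path and port 8080 are taken from the step above:

```shell
# Poll a health URL until it answers 2xx, or give up after N tries.
wait_for_health() {
  url="$1"
  tries="${2:-30}"
  i=0
  while [ "$i" -lt "$tries" ]; do
    if curl -fsS "$url" >/dev/null 2>&1; then
      echo "healthy"
      return 0
    fi
    i=$((i + 1))
    sleep 2
  done
  echo "timed out"
  return 1
}

# Example: wait_for_health http://localhost:8080/health 15
```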
📖 Full post-install configuration: tencentcloud.com/techpedia/143916
✅ Fastest setup path
✅ Officially supported and maintained
✅ No Linux expertise needed
✅ Auto-configured Redis, systemd, networking
❌ Tied to Tencent Cloud Lighthouse
❌ Less flexibility for custom OS configurations
Use this method if you want to deploy on a VPS from any provider (AWS Lightsail, DigitalOcean, Vultr, Hetzner, etc.), or if you need custom OS-level configuration.
1. Update the system
sudo apt update && sudo apt upgrade -y
2. Install system dependencies
sudo apt install -y \
python3.11 \
python3.11-venv \
python3-pip \
redis-server \
git \
curl \
nginx \
build-essential \
libssl-dev \
libffi-dev
3. Clone Hermes Agent
git clone https://github.com/hermesagent/hermes-agent.git ~/hermes-agent
cd ~/hermes-agent
4. Set up Python virtual environment
python3.11 -m venv venv
source venv/bin/activate
pip install --upgrade pip
pip install -r requirements.txt
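A forgotten `source venv/bin/activate` is the classic cause of "module not found" errors later, so it's worth confirming which interpreter is active before continuing. Nothing Hermes-specific here — inside a virtualenv, Python's `sys.prefix` differs from `sys.base_prefix`:

```shell
# Prints True when the interpreter on PATH belongs to an activated venv.
python3 - <<'PY'
import sys
print("venv active:", sys.prefix != sys.base_prefix)
PY
```

If this prints `False`, re-run `source venv/bin/activate` before installing or starting anything.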
5. Configure environment
cp .env.example .env
vim .env
Set your LLM credentials, Redis URL, and API authentication token (same fields as Method 1).
6. Initialize the database and memory backend
source venv/bin/activate
python -m hermes_agent.setup --init-db --init-memory
7. Configure Redis
# Edit Redis config for persistence
sudo vim /etc/redis/redis.conf
Add/uncomment these lines:
appendonly yes
appendfsync everysec
maxmemory 2gb
maxmemory-policy allkeys-lru
sudo systemctl restart redis
sudo systemctl enable redis
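A typo in redis.conf silently disables persistence, so a quick check that the directives actually landed can save a lost-memory surprise later. A small sketch — the config path matches the Ubuntu redis-server package used above:

```shell
# Verify that required directives are present (uncommented) in a Redis config.
check_redis_conf() {
  conf="$1"
  for directive in "appendonly yes" "appendfsync everysec"; do
    if ! grep -q "^$directive" "$conf"; then
      echo "missing: $directive"
      return 1
    fi
  done
  echo "ok"
}

# Example: check_redis_conf /etc/redis/redis.conf
```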
8. Create systemd service
sudo tee /etc/systemd/system/hermes-agent.service << 'EOF'
[Unit]
Description=Hermes Agent
After=network.target redis.service
Requires=redis.service
[Service]
Type=simple
User=ubuntu
WorkingDirectory=/home/ubuntu/hermes-agent
ExecStart=/home/ubuntu/hermes-agent/venv/bin/python -m hermes_agent.main
Restart=always
RestartSec=10
EnvironmentFile=/home/ubuntu/hermes-agent/.env
[Install]
WantedBy=multi-user.target
EOF
sudo systemctl daemon-reload
sudo systemctl enable hermes-agent
sudo systemctl start hermes-agent
9. Configure firewall
sudo ufw allow 22/tcp
sudo ufw allow 8080/tcp # Hermes API (restrict to specific IPs in production)
sudo ufw enable
10. Verify full setup
sudo systemctl status hermes-agent
sudo systemctl status redis
curl http://localhost:8080/health
✅ Works on any Linux VPS from any provider
✅ Full OS-level control
✅ Can customize every component
✅ No vendor lock-in
❌ Significantly more setup time
❌ Requires Linux knowledge
❌ You manage all updates and maintenance
❌ No official support for non-Lighthouse deployments
Use Docker if your infrastructure already runs containerized workloads, or if you want clean isolation and easy scaling.
1. Install Docker (if not already installed)
curl -fsSL https://get.docker.com | sh
sudo usermod -aG docker $USER
newgrp docker
2. Create the deployment directory
mkdir ~/hermes-docker && cd ~/hermes-docker
3. Create docker-compose.yml
version: '3.8'

services:
  hermes-agent:
    image: hermesagent/hermes:latest
    container_name: hermes-agent
    restart: unless-stopped
    ports:
      - "8080:8080"
    environment:
      - LLM_PROVIDER=${LLM_PROVIDER}
      - LLM_API_KEY=${LLM_API_KEY}
      - LLM_MODEL=${LLM_MODEL}
      - REDIS_URL=redis://redis:6379
      - API_AUTH_TOKEN=${API_AUTH_TOKEN}
      - WECOM_ENABLED=${WECOM_ENABLED:-false}
      - WECOM_CORP_ID=${WECOM_CORP_ID:-}
    volumes:
      - hermes_memory:/home/hermes/.hermes
      - hermes_skills:/home/hermes/.hermes/skills
    depends_on:
      redis:
        condition: service_healthy

  redis:
    image: redis:7-alpine
    container_name: hermes-redis
    restart: unless-stopped
    command: >
      redis-server
      --appendonly yes
      --appendfsync everysec
      --maxmemory 1gb
      --maxmemory-policy allkeys-lru
    volumes:
      - redis_data:/data
    healthcheck:
      test: ["CMD", "redis-cli", "ping"]
      interval: 10s
      timeout: 5s
      retries: 3

volumes:
  hermes_memory:
  hermes_skills:
  redis_data:
4. Create .env file
cat > .env << 'EOF'
LLM_PROVIDER=openai
LLM_API_KEY=your-api-key-here
LLM_MODEL=gpt-4o
API_AUTH_TOKEN=your-random-32-char-token
WECOM_ENABLED=false
EOF
chmod 600 .env
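The .env file holds your LLM key, so the `chmod 600` above matters. A small guard you could drop into a deploy script to catch a forgotten or reverted chmod — a sketch assuming GNU stat, as found on any Linux host:

```shell
# Fail if an env file is readable by group or others.
check_env_perms() {
  mode=$(stat -c '%a' "$1")
  case "$mode" in
    600|400) echo "ok" ;;
    *) echo "insecure mode: $mode"; return 1 ;;
  esac
}

# Example: check_env_perms .env
```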
5. Pull and start containers
docker compose pull
docker compose up -d
6. Verify containers are running
docker compose ps
docker compose logs hermes-agent --tail 50
curl http://localhost:8080/health
7. Update Hermes Agent
When a new version releases:
docker compose pull
docker compose up -d
Your memory and skills (stored in named volumes) are preserved across updates.
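Named volumes do survive `docker compose up -d`, but before a major version jump it is still prudent to snapshot them — for example by copying the volume contents out through a throwaway container (`docker run --rm -v hermes_memory:/data -v "$PWD":/backup alpine cp -a /data /backup/volume-copy`) and then archiving the copy. The archiving step itself might look like this (a sketch, nothing Hermes-specific):

```shell
# Archive a directory (e.g. volume contents copied out of a container)
# into a timestamped tarball, and print the archive path.
backup_dir() {
  src="$1"
  dest="$2"
  stamp=$(date +%Y%m%d-%H%M%S)
  archive="$dest/hermes-backup-$stamp.tgz"
  tar czf "$archive" -C "$src" .
  echo "$archive"
}

# Example: backup_dir ./volume-copy ~/backups
```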
You can combine Docker with Lighthouse: use the Lighthouse Docker CE template to get a pre-configured Docker environment, then run the Docker Compose setup above. This gives you both the ease of Lighthouse provisioning and the flexibility of containerized deployment.
👉 Lighthouse Docker CE template — select "Docker CE" as the application image.
✅ Clean isolation — Hermes runs in its own container
✅ Easy version management and updates
✅ Portable — same compose file works on any Docker host
✅ Data persists in named volumes even through container updates
❌ Requires Docker familiarity
❌ Adds a layer of abstraction for debugging
❌ Slightly higher resource overhead than native install
Are you comfortable with Linux administration?
├── No → Method 1 (Tencent Cloud Lighthouse one-click)
└── Yes, continue:
Do you already use Docker in your infrastructure?
├── Yes → Method 3 (Docker deployment)
└── No, continue:
Do you need to use a specific VPS provider (not Tencent Cloud)?
├── Yes → Method 2 (Manual Linux setup)
└── No → Method 1 (still the simplest path)
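For completeness, the decision tree above expressed as a tiny function — the three "yes"/"no" answers are passed in the same order the questions are asked:

```shell
# Mirror of the decision tree: answers are the strings "yes" or "no".
choose_method() {
  comfortable_with_linux="$1"
  already_use_docker="$2"
  need_non_tencent_vps="$3"
  if [ "$comfortable_with_linux" != "yes" ]; then
    echo "Method 1"
  elif [ "$already_use_docker" = "yes" ]; then
    echo "Method 3"
  elif [ "$need_non_tencent_vps" = "yes" ]; then
    echo "Method 2"
  else
    echo "Method 1"
  fi
}

# Example: choose_method yes no yes
```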
For most individual developers: Method 1 on Tencent Cloud Lighthouse. It's the only officially supported path, it's the fastest, and it gives you a 24/7 running agent with the least friction.
If you start with Method 1 and later want to migrate to Method 2 or 3, your agent's accumulated memory and skills are portable:
# On the source instance: stop the agent for a consistent snapshot, then export state
sudo systemctl stop hermes-agent
redis-cli SAVE
sudo cp /var/lib/redis/dump.rdb ~/hermes_redis_backup.rdb
sudo chown "$USER" ~/hermes_redis_backup.rdb
mkdir -p ~/hermes_state_backup
cp -r ~/.hermes/episodes.db ~/.hermes/skills/ ~/hermes_state_backup/
# Transfer to the new server
scp ~/hermes_redis_backup.rdb user@new-server:~/
scp -r ~/hermes_state_backup/ user@new-server:~/
# On the new server: stop Redis before restoring, or the restored dump
# is overwritten by Redis's own save on shutdown
sudo systemctl stop redis
sudo cp ~/hermes_redis_backup.rdb /var/lib/redis/dump.rdb
sudo chown redis:redis /var/lib/redis/dump.rdb
sudo systemctl start redis
mkdir -p ~/.hermes
cp ~/hermes_state_backup/episodes.db ~/.hermes/
cp -r ~/hermes_state_backup/skills/ ~/.hermes/
Your agent picks up where it left off — accumulated knowledge intact.
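A Redis dump or SQLite file that truncates over a flaky scp can fail in confusing ways later, so verifying checksums after the transfer is cheap insurance. A sketch using sha256sum — run the first function on the source before copying, the second on the destination after:

```shell
# On the source: record checksums of every file in the backup directory.
make_manifest() {
  dir="$1"
  ( cd "$dir" && find . -type f ! -name manifest.sha256 \
      -exec sha256sum {} + > manifest.sha256 )
}

# On the destination: confirm a byte-identical copy arrived.
verify_manifest() {
  ( cd "$1" && sha256sum -c --quiet manifest.sha256 )
}

# Example: make_manifest ~/hermes_state_backup
#          verify_manifest ~/hermes_state_backup   # after scp
```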
Whichever method you choose, the starting point is the same:
🚀 Deploy Hermes Agent on Tencent Cloud Lighthouse — one-click, live in 2 minutes (Method 1)
📖 Complete post-deployment configuration tutorial — applies to all three methods
Last updated: April 2025 | Category: Hermes Agent, Cloud Deployment, DevOps
Related: [How to Deploy Hermes on the Cloud: The Definitive Guide] | [Hermes Agent Cloud Deployment Checklist] | [Hermes Agent Cloud vs Local Deployment]