Coolify in Production: What the Docs Don't Tell You


If you’re evaluating self-hosting platforms, Coolify belongs near the top of your list. It gives you a clean UI to deploy Docker Compose stacks, manage SSL certificates via Traefik, set environment variables, and monitor services — all without writing Kubernetes configs or maintaining a custom deployment setup. For solo developers and small teams running open-source tools on a Hetzner or DigitalOcean VPS, it’s genuinely one of the better tools out there.

We run it across multiple client and internal setups. Listmonk, Twenty CRM, Plausible, n8n, Vaultwarden, NocoDB — the full self-hosted stack. It handles all of it.

That said, production has a learning curve. Not because Coolify is badly designed, but because some of its behaviors are non-obvious and not well documented. We’ve compiled the ones that will cost you the most time if you hit them blind.

1. Manual docker-compose.yml edits don't survive a redeploy

This is the most important one. When Coolify deploys a service, it generates a docker-compose.yml from its own internal template. If you SSH into the VPS and manually edit that file — to add a Traefik label, mount a volume, or adjust a command — do not redeploy through the Coolify UI afterwards.

When you trigger a redeploy, Coolify regenerates the compose file from scratch, overwriting every change you made.

The fix is simple once you know it: after making manual changes, apply them directly via SSH instead:

cd /data/coolify/services/<SERVICE_UUID>/
docker compose up -d <container_name>

Coolify will then show a banner saying “The latest configuration has not been applied.” That’s cosmetic — ignore it. Your manual changes are live.

This applies to any service that requires a custom setup not covered by Coolify’s UI, such as adding --static-dir arguments to a Listmonk command or custom Traefik routing rules for multi-container apps.
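For instance, the Listmonk case might look like this in the generated compose file (the --static-dir flag is Listmonk's; the service name and paths here are illustrative, not what Coolify generates verbatim):

```yaml
# /data/coolify/services/<SERVICE_UUID>/docker-compose.yml (excerpt)
services:
  listmonk:
    # Illustrative override: serve customized static assets
    command: ["./listmonk", "--static-dir=/listmonk/static"]
    volumes:
      - ./static:/listmonk/static
```

Apply it with docker compose up -d over SSH, not through the UI.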

2. Restarting via the API regenerates .env

Coolify has an API, and using it to restart a service is convenient. There’s a catch: GET /api/v1/services/{uuid}/restart doesn’t just restart the containers — it regenerates the .env file from Coolify’s internal template first. Any manual edits you made to .env get overwritten silently.

This catches people who fix an environment variable by SSHing in and editing the file directly, then trigger a restart via automation or the API, and find the fix is gone.

The correct pattern for any .env fix you want to persist: SSH in, edit .env, then restart the specific container manually:

cd /data/coolify/services/<SERVICE_UUID>/
docker compose restart <container_name>

If the fix needs to survive future Coolify redeploys, set the value in Coolify’s environment variable UI instead of .env directly.

3. SERVICE_URL_* variables default to HTTP

When Coolify deploys a new service, it auto-generates SERVICE_URL_* environment variables pointing to http://<uuid>.<VPS_IP>.sslip.io. Note: HTTP, not HTTPS.

For internal container communication this is fine. But many apps use these URLs as their own public-facing base URL — Twenty CRM uses it as SERVER_URL; Rybbit uses it for API calls from the browser. When the app is served over HTTPS but tries to make requests to an HTTP URL, the browser blocks it as mixed content. The result is often a cryptic “Unable to Reach Back-end” error or a completely blank page, with nothing useful in the logs.

The fix:

cd /data/coolify/services/<UUID>/
sed -i 's|SERVICE_URL_APPNAME=http://.*|SERVICE_URL_APPNAME=https://yourdomain.com|' .env
docker compose up -d <container_name>

The unfortunate part: this needs to be re-applied after every Coolify redeploy of that service, since .env is regenerated each time.
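Since the overwrite recurs after every redeploy, it's worth scripting the fix. A minimal sketch — the fix_service_urls helper, its arguments, and the paths are ours, not part of Coolify:

```shell
#!/bin/sh
# Sketch: rewrite every auto-generated HTTP SERVICE_URL_* value in a
# service's .env to the real HTTPS domain. Run after each redeploy.
# Helper name and paths are illustrative, not Coolify conventions.
fix_service_urls() {
  env_file="$1"
  domain="$2"
  # Replace the value of every SERVICE_URL_* line that still uses http://
  sed -i "s|^\(SERVICE_URL_[A-Z0-9_]*\)=http://.*|\1=https://${domain}|" "$env_file"
}

# Example: fix_service_urls /data/coolify/services/<UUID>/.env yourdomain.com
```

Follow it with a docker compose up -d for the affected container, as shown above.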

4. Coolify backup for compose-stack databases silently fails

Coolify has a built-in backup feature that backs up databases to S3. It works reliably for databases deployed as Standalone Databases through Coolify’s Databases section. For databases inside Docker Compose stacks — which is how all the one-click services (Listmonk, Twenty, Plausible, n8n, NocoDB) deploy — the backup tab either doesn’t appear at all, or runs and reports “Success” while the S3 upload silently fails.

We verified this against a Cloudflare R2 bucket: Coolify reported a successful backup on every run, yet the bucket remained empty.

For compose-stack databases, use PGBackWeb instead. It’s available as a Coolify one-click service, connects directly to any Postgres container on the VPS via Docker networks, and actually delivers the files to S3. The setup takes about 20 minutes and the web UI is clean.
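Inside PGBackWeb, the connection string then points at the Postgres container by its name on the shared Docker network. Every name and credential below is a placeholder:

```
postgres://listmonk:<DB_PASSWORD>@<postgres_container_name>:5432/listmonk
```

If PGBackWeb can't resolve the container, check that both containers share a Docker network before adjusting anything else.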

For services that use file or SQLite storage (Vaultwarden, Uptime Kuma), add the offen/docker-volume-backup sidecar to their compose stacks via SSH.
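A hedged sketch of that sidecar for a Vaultwarden stack — the image and environment variables come from the offen/docker-volume-backup documentation, but the schedule, bucket, endpoint, credentials, and volume names are placeholders:

```yaml
services:
  backup:
    image: offen/docker-volume-backup:v2
    environment:
      BACKUP_CRON_EXPRESSION: "0 3 * * *"  # daily at 03:00, placeholder schedule
      AWS_S3_BUCKET_NAME: "my-backups"     # placeholder bucket
      AWS_ENDPOINT: "s3.example.com"       # e.g. an R2 or other S3 endpoint
      AWS_ACCESS_KEY_ID: "<key>"
      AWS_SECRET_ACCESS_KEY: "<secret>"
    volumes:
      # Mount the app's data volume read-only for the archive step
      - vaultwarden-data:/backup/vaultwarden:ro
```

As with any manual compose change, apply it with docker compose up -d over SSH rather than redeploying through the UI.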

5. Backup schedules reset after Coolify upgrades

A smaller one, but worth noting: after a Coolify update, check that your scheduled backup jobs are still active. We’ve seen schedules silently reset to disabled after an upgrade. Easy to miss until you realize your backups have a gap.

Put a reminder on your monthly maintenance list to verify scheduled jobs are still running.

What this looks like in practice

None of these are showstoppers. Once you know them, working around them becomes second nature. Coolify’s actual value — one-click deploys, automatic SSL, a clean service overview, readable logs, environment variable management — is real and significant. For a Hetzner VPS running 8–12 self-hosted services, it’s far better than managing raw Docker Compose manually or maintaining an equivalent shell-script setup.

The issues above are edge cases that surface in production but rarely in initial testing. That’s what makes them expensive in time: you won’t hit them until you’re deep in a live setup.

If you’re running a stack like this for your business or for clients, knowing these patterns upfront saves a lot of debugging hours. If you’d rather not spend those hours at all, that’s what we’re here for.

Want to learn more?

See how we set up and operate GDPR-compliant self-hosted infrastructure at a fraction of SaaS costs.
