You deploy the application on Coolify, the build passes successfully… and suddenly at the image export stage:
ERROR: failed to extract layer sha256:... write /var/lib/containerd/.../File/dirname-c.ri: no space left on device
The build has completed, but there is nowhere to write the image - the disk is full.
🤔 Why this happens
Docker accumulates junk with each deployment:
- Old images - each build creates a new image, old ones remain
- Build cache - intermediate layers of the build are never automatically deleted
- Stopped containers - undeleted containers from previous deployments
- Unused volumes - remnants from deleted services
On a small server (4–8 GB disk), this can fill up the disk within a few weeks of active development.
Especially if you are running Rails + Sidekiq + Redis + Elasticsearch.
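To see how much of this junk has piled up on your own server, a few read-only commands are enough (nothing here deletes anything):

```shell
# Untagged images left behind by repeated builds
docker images -f dangling=true

# Containers left over from previous deployments
docker ps -a -f status=exited

# Volumes no longer attached to any container
docker volume ls -f dangling=true
```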
🔎 Diagnosis
Check the disk status:
df -h
See how much space Docker is using:
docker system df
Usually, you will see tens of gigabytes in the Images or Build Cache section.
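For a per-item breakdown showing exactly which images, containers, and volumes take the space, the same command has a verbose mode:

```shell
# Lists every image, container, volume, and cache entry with its size
docker system df -v
```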
🛠 Fix
docker system prune -a --volumes
This command removes:
- All stopped containers
- All networks that are not in use
- All images without active containers
- All build cache
- All volumes without active containers (--volumes)
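If you'd rather not risk volume data at all, the same cleanup can be split into a safer two-step variant: prune everything except volumes first, then review what is left before deleting it:

```shell
# Step 1: remove stopped containers, unused networks, old images, build cache
docker system prune -af

# Step 2: list volumes not attached to any container and review them manually
docker volume ls -f dangling=true

# Step 3: only after verifying nothing important is listed
docker volume prune -f
```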
⚠️ Attention
--volumes will also delete the data inside volumes.
If your database (PostgreSQL / MySQL) stores its data in a Docker volume and you have no external backup, make a dump first.
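A minimal dump sketch; the container names (postgres, mysql) and database name (myapp_production) are placeholders, substitute your own:

```shell
# PostgreSQL: dump from inside the running container to a file on the host
docker exec postgres pg_dump -U postgres myapp_production > pg_backup.sql

# MySQL equivalent (-p will prompt for the password interactively)
docker exec -i mysql mysqldump -u root -p myapp_production > mysql_backup.sql
```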
🧼 Prevention
To avoid getting into this situation regularly:
Add a cron job for automatic cleanup
Every Sunday at 3:00 AM (without --volumes, so database data is not affected):
0 3 * * 0 docker system prune -af
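If a blind weekly prune feels too aggressive, the cron entry can call a small guard script that only prunes when the disk is actually getting full. The script path and the 80% threshold below are assumptions, and `df --output=pcent` requires GNU coreutils:

```shell
#!/bin/sh
# /usr/local/bin/docker-cleanup.sh (hypothetical path)
# Prune only when root filesystem usage crosses the threshold.
THRESHOLD=80
# GNU df prints a header line, then e.g. " 83%"; strip everything but digits
usage=$(df --output=pcent / | tail -1 | tr -dc '0-9')
if [ "$usage" -ge "$THRESHOLD" ]; then
  docker system prune -af
fi
```

The cron entry then becomes `0 3 * * 0 /usr/local/bin/docker-cleanup.sh`.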
Increase the disk
If you are running:
- Rails
- Elasticsearch
- Redis
- Sidekiq
a 4 GB disk is not enough. A comfortable minimum is 20-40 GB.
Limit build cache
docker builder prune --keep-storage 2GB
This allows you to keep the cache but prevents it from growing uncontrollably.
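A sketch of checking the cache size before capping it; the 2 GB cap is the same assumption as above:

```shell
# The "Build Cache" row shows the current total
docker system df

# Delete build cache down to roughly the 2 GB cap
docker builder prune -f --keep-storage 2GB
```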
If a deployment fails with no space left on device, it is not a bug in your application. Docker has simply accumulated too much old data.
One command:
docker system prune -a --volumes
solves the problem in seconds.
But it's better to set up automatic cleanup so you don't have to think about it during the next release.