Stardate 2021-01-23
tl;dr: If you are deploying docker containers to production, be sure to run
`docker system prune -f` before pulling new images so you don't run out of
disk space.
When I'm building a new project, I generally lean towards using docker-compose
during development.
When coupled with docker-machine I have a quick and easy way to deploy my
containers to the cloud provider of my choice.
Overall, it works really well, but there's one important thing to consider when
using docker for production: running out of disk space.
Images, containers, networks, and volumes will continue to accumulate on a
production VM, which will inevitably lead to the hard drive running out of
space. This seems obvious, but it wasn't obvious to me when I first started
using `docker-machine`.
I recently deployed a new app (listifi) to Google Cloud Compute and ran into
out-of-disk-space issues after a bunch of iterations. The default VM only has
around 15GB of disk.
Here's a brief overview of my deployment process:
- Develop using `docker-compose` locally
- Build features locally
- Use a production tuned `docker-compose` yml file to build images locally
- Push the images to Google's Container Registry
- Then I run `eval $(docker-machine <name> env)` to tunnel into my production
VM's docker
- Then I run `docker-compose -f production.yml pull --ignore-pull-failures` to
download the new images
- Then I run `docker-compose -f production.yml up --no-deps -d` to restart my
containers with the new images
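For reference, the whole flow fits in a short shell script. This is just a
sketch of the steps above; the machine name (`my-vm`) and the use of
`docker-compose push` for the registry upload are assumptions, not a copy of my
actual setup:

```
#!/bin/sh
# deploy.sh -- sketch of the manual deploy flow described above
set -e

# Build production-tuned images locally
docker-compose -f production.yml build

# Push them to the container registry (assumes each service has an `image:`
# entry pointing at the registry, e.g. gcr.io/<project>/<service>)
docker-compose -f production.yml push

# Point the local docker CLI at the production VM's docker daemon
eval "$(docker-machine env my-vm)"

# Pull the new images on the VM and restart the containers
docker-compose -f production.yml pull --ignore-pull-failures
docker-compose -f production.yml up --no-deps -d
```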
This process works great. I don't need to set up CI for a new project, but it
still provides me with the flexibility to deploy to my own VM and inspect its
health with docker.
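Inspecting the VM works the same way: once the `docker-machine` environment is
active, ordinary docker commands run against the remote daemon. A quick sketch,
with placeholder machine and container names:

```
# Target the production VM's docker daemon
eval "$(docker-machine env my-vm)"

# These now report on the production VM rather than my laptop
docker ps
docker stats --no-stream
docker logs --tail 100 my-app-container
```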
Things are working great: I'm iterating on feature development and deploying in
between major changes. Only, the last deploy I tried to perform failed. The
reason: the hard drive is out of space. Hmm, my VM has 16GB of disk space, so
why is it full?
When I run `docker system df` the problem becomes clear: unused images are
soaking up all of my hard drive space. I had scoured docker deployment guides
and never came across documentation that mentions this issue. There are plenty
of StackOverflow questions about the problem, each with a solution, but I never
really made the connection to my production VM until I hit it myself.
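If you want to see where the space is going on your own machine or VM,
`docker system df` breaks usage down by images, containers, local volumes, and
build cache, including how much is reclaimable. A quick sketch, again assuming
the docker-machine setup above:

```
# Target the production VM first
eval "$(docker-machine env my-vm)"

# Summarize disk usage by images, containers, local volumes, and build cache;
# the RECLAIMABLE column is what pruning could free
docker system df

# Add -v for a per-image, per-container, and per-volume breakdown
docker system df -v
```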
Now, before I pull my new images in production, I run another command:

docker system prune -f

See the official documentation for `docker system prune` for exactly what it
removes.
Once I added that to my deployment step, things have been working much more
smoothly.
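Concretely, in the sketch above, the pull step now starts with the prune:

```
# Remove stopped containers, unused networks, dangling images, and dangling
# build cache; -f skips the confirmation prompt. Running containers and the
# images they use are left alone.
docker system prune -f

# Then pull the new images and restart as before
docker-compose -f production.yml pull --ignore-pull-failures
docker-compose -f production.yml up --no-deps -d
```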
I had completely forgotten that as I deployed new changes to production, old
docker images were lying dormant on my production VM, growing in size over
time.
It never clicked for me until I ran into the problem. Hopefully this short blog
post will help others avoid it in the future.
--
Want to chat? email me at gemlog@erock.io