Input on Docker Compose Best Practices

I am looking into moving to containers (Docker and Docker Compose) and away from installing everything with a package manager. I have spent some time learning this topic and have a few questions. They are more about opinions and best practices; I am curious to see what everyone else thinks as I try to figure out what would work best for me on my home server/lab.

  • Should you run your docker commands as root (e.g. with sudo), or use a user that is part of the docker group?
  • Is it better to have one big docker-compose.yml file, or have one for each application/stack?
  • Where should you save your docker-compose files? I have seen /opt/containers/, /opt/appdata, and putting them in your home directory. For some reason, I don’t seem to like the idea of using your home directory.
  • What is the best practice for keeping your containers up to date and making sure security upgrades are applied right away? I have seen docker compose pull && docker compose up -d && docker system prune, as well as “Watchtower” and “What’s Up Docker”. This is the part I am struggling with the most, as there does not seem to be anything as clear-cut as using a package manager with unattended-upgrades.

Thank you

Doesn’t make much of a difference. Just be careful what you run as root, especially things copy-pasted from the internet, and you’ll be fine even without the docker group. It depends on your security concerns / threat model.

Wherever you can find them easily. If you keep them in a git repository, that also works, and you get the benefit of a history of changes whenever you modify them.

Can’t say much as far as unattended updates are concerned. Best to get news via RSS or git when a new version comes out, and then either apply updates as they come or automate them once you detect a new version from those sources.

Personally I don’t like automatic updates. I like notifications about updates though. Whenever I see a notification, I update and then check what needs a service restart.

@ThatGuyB Thank you for the response. What do you use to receive notifications?

You might find some of this helpful:

  • Should you run your docker commands as root (e.g. with sudo), or use a user that is part of the docker group?

I believe you must use sudo; I have never been able to do it otherwise.

  • Is it better to have one big docker-compose.yml file, or have one for each application/stack?

I guess it depends. If I am installing something like WordPress and pairing it with MariaDB and phpMyAdmin, I would rather have one big file.
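Something like this, roughly (a minimal sketch; the image tags, passwords, and host ports are placeholders, not a vetted config):

# Minimal single-file sketch: WordPress + MariaDB + phpMyAdmin. Secrets are placeholders.
services:
  db:
    image: mariadb:10.11
    environment:
      MYSQL_ROOT_PASSWORD: changeme
      MYSQL_DATABASE: wordpress
      MYSQL_USER: wordpress
      MYSQL_PASSWORD: changeme
    volumes:
      - db_data:/var/lib/mysql

  wordpress:
    image: wordpress:latest
    depends_on:
      - db
    environment:
      WORDPRESS_DB_HOST: db
      WORDPRESS_DB_USER: wordpress
      WORDPRESS_DB_PASSWORD: changeme
      WORDPRESS_DB_NAME: wordpress
    ports:
      - "8080:80"
    volumes:
      - wp_data:/var/www/html

  phpmyadmin:
    image: phpmyadmin:latest
    depends_on:
      - db
    environment:
      PMA_HOST: db
    ports:
      - "8081:80"

volumes:
  db_data:
  wp_data:

Since everything sits in one file, all three containers share the default network Compose creates, so WordPress and phpMyAdmin can reach the database by its service name (db).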

  • Where should you save your docker-compose files? I have seen /opt/containers/, /opt/appdata, and putting them in your home directory. For some reason, I don’t seem to like the idea of using your home directory.

I put them in subdirectories in my home folder and execute each from its own subdirectory. It may not be a best practice; it’s just what I do.

  • What is the best practice for keeping your containers up to date and making sure security upgrades are applied right away? I have seen docker compose pull && docker compose up -d && docker system prune, as well as “Watchtower” and “What’s Up Docker”. This is the part I am struggling with the most, as there does not seem to be anything as clear-cut as using a package manager with unattended-upgrades.

I use Portainer. I have the “always pull the image” option enabled, so I can just take a container down, bring it back up, and Portainer will pull the newest image. TechnoTim did a nice video about how to automate this process, but I haven’t automated it in my environment yet.

There’s some stuff I prefer not to run in Docker. I find I have two main headaches with it. First, networking is a bear at times. As a result, I prefer to run WordPress and Nextcloud in VMs (each in its own VM for security and isolation purposes). This way I can more easily automate the update process using Ansible, and it makes the networking a LOT easier.

The other issue I have not fully conquered is setting up NFS shares for my Docker volumes. I have gotten it to work, but it is a giant pain in the butt. I expose both WordPress and Nextcloud to the internet behind Cloudflare Tunnels. I use NFS shares to store and back up their data, and I use Proxmox snapshots and backups for the VMs. It is just a TON easier to manage this way, at least for me. I had gotten both Cloudflare and WordPress running in Docker, but it was a lot harder and I wasn’t having fun with it. This is a hobby for me, not a job, so since my server has plenty of resources, I run them in VMs for the sake of simplicity and making my life easier.
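The Ansible side of the updates is nothing fancy, by the way. Assuming Debian/Ubuntu guests, it boils down to something like this sketch (the wordpress_vms inventory group is a placeholder for however you group the VMs):

# Sketch only: apply pending apt upgrades to the VMs.
# Assumes Debian/Ubuntu guests; "wordpress_vms" is a hypothetical inventory group.
- hosts: wordpress_vms
  become: true
  tasks:
    - name: Refresh apt cache and apply pending upgrades
      ansible.builtin.apt:
        update_cache: true
        upgrade: dist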

But there are other software packages that I run in Docker, like Grocy, Mealie, PhotoPrism, Home Assistant, and the Cloudflare connectors.


Currently, only my window manager on my system. At my old workplace, we used to have a Centreon setup that monitored when updates were available. I’d do monthly updates for some servers, like our GitLab and Jenkins.

I wouldn’t recommend Centreon. It’s a good product, but it’s based on Nagios and kind of legacy. Of course, writing modules for it is easy because it’s just shell scripts, but that’s another story. You could use Prometheus + Alertmanager, or just Zabbix, and write your own update-check scripts. With some automation, you could click a button and “fix” it by running the updates.

Depends on the scenario. Some things do require root, like binding to a local port below 1024. Sometimes you can get away without it, but running as root is not as bad as people make it out to be. You just need to be careful about what commands you run as root and what software you choose to trust.

I wanted to mention this earlier, but I only remembered after I had already posted the comment, and then I moved on to something else and forgot completely. Everyone should check out the Pi-Hosted series from NovaspiritTech.

It can be done if your containers are privileged, but that’s not recommended. The really annoying way of doing it is mounting it manually in the started container’s rootfs (if you have an fs overlay and not something like dedicated LVM, ZFS, or Btrfs volumes for each container, in which case this won’t work).
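If the goal is specifically NFS-backed Docker volumes, there’s also the option of declaring the NFS mount in the compose file itself and letting the Docker daemon mount it on the host side, which avoids privileged containers. A rough sketch (the server address and export path are placeholders):

volumes:
  nextcloud_data:
    driver: local
    driver_opts:
      type: nfs
      o: "addr=192.168.1.10,rw,nfsvers=4"   # hypothetical NFS server
      device: ":/export/nextcloud"          # hypothetical export path

A service then references nextcloud_data like any other named volume, and the daemon performs the mount the first time a container uses it.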

The only software that I’d actually run Docker for would be Vaultwarden, if I used it. For anything else, I can get away with LXC containers. Lately I’ve been looking into NixOS and Nix microVMs with Firecracker. Not for the faint of heart though.


I’ll just leave this right here:

  • Should you run your docker commands as root (e.g. with sudo), or use a user that is part of the docker group?

You might prefer to add yourself to the docker group and not use sudo. It’s simple enough:

sudo usermod -aG docker username

then log out of your shell and back in. It saves the frustration of forgetting to type sudo before the command. Since doing this, I have not needed to use root for anything that I personally have done in docker.

  • Is it better to have one big docker-compose.yml file, or have one for each application/stack?

I like to use a different docker-compose.yml file for each service/application. Unless you are trying to have all the applications on one Docker network, there is no need to do it all in one; that way you have all the containers for one service in one docker-compose file. Even if you do want the services on one Docker network, you might find it easier to keep them segmented per docker-compose file and have them all join the same network. If you need to amend a docker-compose file, you are only messing with one service instead of all of them.
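A rough sketch of the segmented-files-plus-shared-network idea (shared_net and myapp are just example names):

# created once, outside of compose:
#   docker network create shared_net

# then in each service's own docker-compose.yml:
services:
  myapp:
    image: myapp:latest   # placeholder service
    networks:
      - shared_net

networks:
  shared_net:
    external: true        # join the pre-created network instead of creating a new one

Each stack stays in its own file, but every container attached to shared_net can reach the others by service or container name.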

  • Where should you save your docker-compose files? I have seen /opt/containers/, /opt/appdata, and putting them in your home directory. For some reason, I don’t seem to like the idea of using your home directory.

Whatever makes sense to you. I originally started with my home directory when I was first learning Docker. I now prefer to create a docker directory at the root level, with a separate directory under that for each service, and inside the service directories I create all the volumes for the containers. Now when I back up my /docker directory, I have all my data and docker-compose files in one shot.
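Roughly this kind of layout (the service names are just examples):

/docker/
  nextcloud/
    docker-compose.yml
    data/                 <- bind-mounted volumes for this service live next to its compose file
  wordpress/
    docker-compose.yml
    db/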

  • What is the best practice for keeping your containers up to date and making sure security upgrades are applied right away? I have seen docker compose pull && docker compose up -d && docker system prune, as well as “Watchtower” and “What’s Up Docker”. This is the part I am struggling with the most, as there does not seem to be anything as clear-cut as using a package manager with unattended-upgrades.

Depends on how important stability is to you. I have been using Watchtower for a couple of years now on one of my systems in my homelab and have not had any issues with it. But if any of those services goes down, I won’t be bothered, and it’ll give me a chance to troubleshoot. For any services that I want to keep a little more reliable, I do the updates manually. Just like anything else, it depends on what you want.
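If you want to try Watchtower, it runs as a container itself. A minimal sketch looks roughly like this (the schedule and cleanup values are just examples, not the only sensible settings):

services:
  watchtower:
    image: containrrr/watchtower
    volumes:
      - /var/run/docker.sock:/var/run/docker.sock   # lets Watchtower inspect and restart the other containers
    environment:
      WATCHTOWER_CLEANUP: "true"           # remove old images after an update
      WATCHTOWER_SCHEDULE: "0 0 4 * * *"   # example: check once a day at 04:00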

I’m not sure if any of this qualifies as best practices, but it works for me.