Linux Servers: Which Distro should you use?!

Originally published at: Linux Servers: Which Distro should you use?! – LearnLinuxTV

In the Linux community, there’s constant debate about which distribution is best for your desktop. However, there’s not as much discussion regarding Linux distros for your server. There are many good options for your Linux server project, and in this video Jay discusses his top 6 choices.


Just in case my comment gets deleted on YouTube, I’ll re-post it here.

I’m using Debian as my hypervisor to run a couple of server VMs, such as pfSense and a storage server that also runs Debian. I’m also running LXC containers (lxc-start, lxc-stop, lxc-attach, etc.). My containers run LDAP, and I have a web server container that gets assigned multiple IP addresses (one IP address per virtual host), so I don’t need multiple web servers.
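For anyone who hasn’t used the classic LXC tools, the lifecycle is just a few commands. This is only a rough sketch; the container name, distro release, and the second IP address are made-up examples:

```shell
# Create a container from the download template and start it
sudo lxc-create -n webserver -t download -- -d debian -r bookworm -a amd64
sudo lxc-start -n webserver

# Get a shell inside the running container
sudo lxc-attach -n webserver

# Inside the container, a second IP (for another virtual host)
# can be added to the same interface:
ip addr add 192.168.1.51/24 dev eth0

# Stop it when you're done
sudo lxc-stop -n webserver
```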

At least my comment is still there. Not meant to cross-post.


I don’t want to start a thread war, but after the Red Hat fiasco with CentOS, I will never again use a commercially controlled distro. That’s me; you use what you want, and I won’t judge.

I moved all of my servers, VMs (KVM), and RPis from CentOS to Debian 11 and could not be any happier.

I’m using Solus on my laptops and Garuda on my gaming computer.


Yeah, the CentOS thing was the last straw for me with RH.

If you want the latest and greatest packages, Debian probably won’t get you there. However, if you want a secure, stable, well-tested, and reliable server, you can’t really go wrong with Debian. I like Ubuntu servers too, but I tend to stay one LTS behind. Currently, all our work servers are running OL7 with UEK.


I have autism, so if I’m not using Void Linux, I’m probably running Alpine. Void is my distro of choice for both desktop (my RPi 4, daily driver) and servers. I currently only have a Samba VM and a separate Grafana + Prometheus VM, both on Void. My Pi also runs LXD containers from time to time (lxc commands).

I am running Alpine Linux on my RPi 3, which I’m using as a router (on the Ethernet port) while getting internet via WiFi. I also have OpenVPN running on the Pi 3, so all traffic that comes in through the eth port gets tunneled out through my VPN server.
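The plumbing for that kind of setup is roughly the following. Interface names are assumptions (eth0 for the LAN side, tun0 for the OpenVPN tunnel); yours may differ:

```shell
# Enable IPv4 forwarding so the Pi routes between interfaces
sysctl -w net.ipv4.ip_forward=1

# Masquerade LAN traffic (eth0) out through the VPN tunnel (tun0)
iptables -t nat -A POSTROUTING -o tun0 -j MASQUERADE

# Allow forwarding LAN -> tunnel, and return traffic back
iptables -A FORWARD -i eth0 -o tun0 -j ACCEPT
iptables -A FORWARD -i tun0 -o eth0 -m state --state RELATED,ESTABLISHED -j ACCEPT
```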


I’m really looking forward to seeing more content about SUSE on servers. I thought YaST was a graphical configuration interface, or can it also be used on the server side as a command-line utility?

It’s an ncurses-type interface.


My thinking exactly. Debian works perfectly for me as a server OS.

Really happy to hear that.

I run Grafana on an RPi as well.

I should look at running Prometheus in a container. That would be a good learning project for me.


So that’s what this type of interface is called, today I learned :smiley:


I couldn’t figure out how YaST works, at least the CLI version. I just default to using zypper and editing .conf files. I even had issues trying to do a simple update through the YaST ncurses util. That doesn’t necessarily mean YaST is bad, just that I’m too stupid to use it.

I’m running it from the Void Linux repo, installed via the package manager. It should be available for Debian too, and if not, it’s really easy to add the Prometheus repo.

The config is also really easy. You add the hosts you want to monitor to the Prometheus conf file in /etc, then install node-exporter on each host you want to monitor (it may be named node_exporter depending on the repo; on Proxmox, i.e. Debian 11, it’s called prometheus-node-exporter), and it should be up and running. Then you just point Grafana at the Prometheus server (it can run on the same host or another one, doesn’t matter; I have it running on the same VM) and import a few dashboards (I use “Prometheus Node Exporter Full” and “Node exporter server metrics”). And you should be all set; it’s not that hard.
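For reference, the relevant bit of the Prometheus config looks something like this (the hostnames are made up; node_exporter listens on port 9100 by default, and the file usually lives at /etc/prometheus/prometheus.yml):

```yaml
scrape_configs:
  - job_name: 'node'
    static_configs:
      - targets:
          - 'localhost:9100'    # the Prometheus host itself
          - 'server1.lan:9100'  # other hosts running node_exporter
          - 'server2.lan:9100'
```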

I have it running in a VM, but I can move it to LXD at any time (and probably will at some point; I’ll run it on ARM when I build an SBC cluster in my homelab). I prefer LXD because it’s the classic way of administering Linux systems, which I am used to, while OCI containers (Docker / Podman / k8s, etc.) are too new and different for me. That’s not to say OCI containers are bad, just that I’m used to LXD and prefer it, since the resource overhead isn’t that different.


I have a k8s cluster running on RPis, but I have not done much with it. I built it to learn Kubernetes, but then my priorities changed. My security certs are expiring this year, so I’m back in study mode for a bit so that I can re-certify.

I’m already running Telegraf on all my nodes. Telegraf pushes to InfluxDB, which I have integrated with Grafana. I’ll look to see if Prometheus can read from Telegraf.
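Quick note for anyone else wondering the same thing: if I’m reading the Telegraf docs right, it has a prometheus_client output plugin that exposes whatever Telegraf collects as a scrape endpoint, so Prometheus can pull from the nodes without replacing Telegraf. Something like this in telegraf.conf (port 9273 is the plugin’s usual default):

```toml
# telegraf.conf: expose collected metrics in Prometheus format
[[outputs.prometheus_client]]
  listen = ":9273"
```

Then you’d just add each node’s :9273 endpoint as a Prometheus scrape target.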


I really understand the frustration here for people who were counting on CentOS. However, I believe the situation with IBM and Red Hat is very different from that of Canonical and Ubuntu, to take one example.

Yes, Canonical has made some questionable decisions in the past, but I’m fairly confident that Ubuntu will continue its release model for many years to come. To me, their philosophy and approach to Linux development seem very different from IBM’s.

So I’ll happily continue to use Ubuntu, because I think it’s stable, user-friendly, and logical, and I like the release model. However, I totally understand if some people prefer more community-driven distributions.

Cheers! :blush::sunglasses:


The other sysadmins and I affected by the CentOS fiasco had a specific requirement for RHEL compatibility. Our customers used RHEL, but we couldn’t afford, nor wanted, to use RHEL for all our testing purposes, so CentOS it was. But upgrading to 8, which was supposed to be supported until 2028 or so, and then having the support rug pulled out from under our feet, was a dirty move.

I can assure you the outrage would not have been as bad if they had only left the support in place. I only had a few CentOS 8 production systems, so I immediately moved them to Oracle Linux 8 via the migration script and called it a day. If I were still working there, I’d use the Rocky migration script to convert OEL to Rocky. Other people didn’t have it as easy, though. I know someone who had just finished upgrading a few hundred servers to CentOS 8 about a month before Red Hat announced they would drop support. From what I heard back then, using the script was not an option, and that guy moved to openSUSE. But for people who need RHEL compatibility because they have to test software that others will run on RHEL, the move to kill CentOS was devastating.

Things aren’t as bad now with Alma and Rocky, but back then, the only one we could depend on was Oracle, which was a fear-inducing thought in itself. Oracle’s UEK is pretty nice, though, and their plan is mostly to get customers to move from RHEL to OEL and buy support. Unlike RHEL, they don’t require registration, licensing, or other silly stuff to run OEL, but it’s Oracle; you never know.


Agreed. I do recognize the contributions that Canonical and Red Hat have made. Their efforts have really helped push Linux and open-source growth tremendously.

However, it’s a bit hard for me to describe the difference between a community-driven project and a commercially driven one. Both have their merits, and each has its own advantages and disadvantages. The best example I can think of is the CrowdSec project. I have not seen the kind of information sharing that project adopted among commercially driven security projects. And again, this is just my preference, but I love the “for the community, by the community” approach.

Yes, I agree that Red Hat’s pulling of support early was poorly handled.

I ran CentOS 7 at home because I wanted to learn Red Hat. I run Debian now. I have access to Red Hat servers at work and learn what I need there.


Currently, I run a Debian 11 server (upgraded without issues from Debian 10) at work. I have just two things running on it: a CRM and my SSG personal notes wiki.

At home, my son and I are running our home servers on Fedora. I love Fedora as my desktop, and Fedora Server is a great project too, but I am struggling to learn Podman and containers. I’ve taken a break from that while I learn SQL and administering MariaDB, so that I can do more with the open-source CRM I’m running at the office. I’m not a programmer, but I’m having fun learning SQL and databases.

Super interesting to hear about people running LXD containers, and not on Ubuntu, which surprised me. I’ve thought about switching to LXD as my container technology, because I’m not finding a lot of documentation on Podman, rootless containers, and interfacing with SELinux written for a beginner like me. If anyone has good documentation to recommend for LXD, I’d be interested.


Podman is still a bit of new territory, so it has a fairly steep learning curve with not much documentation. I think microk8s or k3s on a single node would be easier. Portainer + microk8s is a good combination, I hear, but I’ve never used either, so I can’t confirm. On microk8s it’s as easy as a “microk8s enable portainer.” But I’m not sure whether microk8s is available outside of Ubuntu and snaps.
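For k3s, at least, the single-node install is famously a one-liner (this is straight from their site; I haven’t run it myself, so treat it as a sketch):

```shell
# Install k3s as a single-node cluster (server and agent in one process)
curl -sfL https://get.k3s.io | sh -

# k3s bundles its own kubectl
sudo k3s kubectl get nodes
```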

Here’s the documentation for LXD.
Jump to “lxd init”, and after you finish, go straight to launching containers / images.

Alpine Linux has LXD in its community repo, Fedora has a COPR repo for it, Arch surprisingly has it in the main repo and not in the AUR (I guess the -git / latest versions are in the AUR), and Gentoo can compile it with emerge. Void also has it in its main repo.

Since you prefer Fedora, I suggest you add the COPR repo; it should be fine. I searched the Debian repos (on Proxmox), and it looks like LXD is not available as a deb, only as a snap, which is unfortunate. Maybe there’s a custom repo, but I slightly doubt it.

After you launch an LXD container, all you have to do is exec into its bash, and it’s just as easy as using a normal distro, which is why I like it. It’s more like chroot on steroids, with its own init and process supervisor (a.k.a. service manager): systemd doesn’t work in a chroot, but it works in lxc / lxd containers (as do other inits and supervisors).

After you learn how to launch, start, stop, and delete LXD containers, I suggest you move on to profiles, so you can modify instance properties (max RAM, number of CPU threads, network interface, etc.). I especially encourage people to learn to set up bridge networks on their Linux hosts and bridge LXD to the local LAN, so you can access services in LXD from other clients on the LAN (by default LXD sits behind a NAT, just like Docker, but it doesn’t need to).

And that’s about it; there isn’t much to using LXD in the first place. It’s a tool that gets out of your way and lets you configure things like you would any other Linux server.
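To give a taste of it, the basic lifecycle is just a handful of commands. The container name is made up, and the bridged-NIC line at the end assumes you already have a host bridge called br0:

```shell
# One-time setup (answer the prompts; defaults are fine to start)
lxd init

# Launch a container from the public image server
lxc launch images:debian/11 web01

# Get a shell inside it; from here it's just a normal distro
lxc exec web01 -- bash

# Limit resources per instance (or set these in a profile)
lxc config set web01 limits.memory 1GiB
lxc config set web01 limits.cpu 2

# Put the container on the LAN via an existing host bridge (br0)
# instead of the NATed lxdbr0
lxc config device add web01 eth0 nic nictype=bridged parent=br0

# Stop and delete when done
lxc stop web01
lxc delete web01
```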


What’s refreshing to me, and probably to those who remember the days when it was MS servers or nothing, is the fact that we’re talking about choice, not being relegated to a specific mantra. No matter which major distro line you pick, it’s going to make a quality server if you manage it properly.

For me, system administration comes down to the package manager. I don’t know what it is, and can’t put my finger on it, but I’ve always liked the Debian / Ubuntu package management offerings better than the RPM-based distros’. Having said that, I also like Arch and pacman, though I’ve never run Arch in production. Pacman seems very logical and easy to use, and its packaging mechanics are really easy to understand as well.


As a complete beginner running into issues as I go, learning as much as I can, I really wish you could do exactly that. I find that a lot of the conversation about distributions revolves around this particular topic.

I don’t have a ton of experience; I have a couple of VPSes running Ubuntu and CentOS (which I recently switched to Ubuntu, though I intend to move to either Alma or Rocky). To me, the package managers worked well in either distribution, and there were no major issues. Same thing when running Docker: I tend to pick images built on Debian, which seem to be very common. Installing, uninstalling, and clearing the cache all seem rather simple.


Most discussions revolve around the package manager, because that’s what most people interact with, but distros have other differences, like how they set up the network (NetworkManager, netplan, /etc/network/interfaces, ConnMan, straight-up ip or ifconfig commands at boot, etc.). Then, in the system administrator’s world, you get other things like the process supervisor, a.k.a. service manager: systemd, OpenRC, runit, and s6. Most distros use systemd, some offer you a choice, and some use only one. But if you go with something like Alpine because it’s lightweight and secure, you need to consider that it runs OpenRC, so you have to learn how to use that instead of systemd unit files, if that’s what you’re used to (especially if you’ve only used Ubuntu or newer CentOS).
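To make the difference concrete, here are the equivalent day-to-day commands side by side (sshd is just an example service; Alpine names it sshd in OpenRC, but names can vary per distro):

```shell
# systemd (Ubuntu, Debian, CentOS 7+, Fedora, etc.)
systemctl enable --now sshd   # enable at boot and start immediately
systemctl status sshd         # check state and recent log lines

# OpenRC (Alpine, Gentoo's default)
rc-update add sshd default    # enable in the default runlevel
rc-service sshd start         # start it now
rc-service sshd status        # check state
```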

Somewhat related to the package manager, what you can find in a distro’s repos may also come into the discussion. CentOS has EPEL, which has a lot of server stuff; Ubuntu has PPAs; and Debian and CentOS have the most repos provided by software developers (GitLab, for example: you can add the repo and be good to go; you can’t do that on Arch, though the AUR may have gitlab). If you need support, Ubuntu, SUSE, RHEL, and OEL offer it (and I work in a place with big customers who work directly with the Linux vendors to squash bugs that affect their production servers).

Then again, as a big company, you may want to base your infrastructure on what the industry has to offer. So, say software availability is not a concern (it’s just Linux; someone can make it work on your servers). You may still want to hold off from running Gentoo on all your servers. Performance would be great, and compiling things once and sharing them with the rest of the identically configured servers would be awesome, but can you find sysadmins who know how to use Gentoo? Most businesses go with CentOS (now Rocky and Alma), Ubuntu, or Debian because of the big pool of users who know them and can adapt from one to another; if you do a custom Linux distro or use something like Gentoo, you have to take that into consideration.

Other differences come in on the security side, like AppArmor vs. SELinux. You can run one or the other on whatever distro you want, but then you’d have to debug and add exceptions yourself for everything, which is a pain, so most people go with the default if they need it.
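Even just checking on the two feels different. A few of the common commands (the AppArmor profile path is only an example; profiles live under /etc/apparmor.d/):

```shell
# AppArmor (the default on Debian, Ubuntu, SUSE)
sudo aa-status                                   # list loaded profiles and their modes
sudo aa-complain /etc/apparmor.d/usr.sbin.nginx  # switch one profile to log-only mode

# SELinux (the default on RHEL, Fedora, CentOS)
getenforce                       # prints Enforcing / Permissive / Disabled
sudo setenforce 0                # switch to Permissive until reboot
sudo ausearch -m avc -ts recent  # review recent denials from the audit log
```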

So, there are subtle differences between distros, and people need to consider them before picking their poison.