Linux Servers: Which Distro should you use?!

@Biky thanks for the amazing response! I think I’m going to give LXD another try after I spend a few more weeks learning SQL and MariaDB configuration. You gave some great advice on learning LXD. My ultimate goal in learning a container tech is to move the CRM running at the office into a container, so that if my budget server (an old laptop running Debian 11) breaks down, I can be up and running on a new “budget server” without having to do a full install and configuration of the CRM. There are some great nuggets of information in this thread.

2 Likes

@KI7MT had some great thoughts about choice. @jay and this community have really encouraged my exploration of a home lab, both for fun and for adding tools at work; none of them are mission critical, but they are nice additions. “Choice” has me doing the following:

  • A Raspberry Pi 1B running Raspberry Pi OS as what I think might be called a “bastion server”: it runs PiVPN with WireGuard to give me access to my work network from my home network. It is also my private Git server for the dotfile manager I use on all of my home and work computers.
  • An old laptop running Debian 11 with a CRM and my SSG personal wiki.
  • A file server, built from recycled desktop parts from 4 or 5 machines, running Fedora 35, which is automatically turned on for file backups by our main home server and then shut down after the backup is complete.
  • An old Dell OptiPlex, our main home server, which acts as my son’s server playground running Fedora 35 and also holds our “source of truth” copy of all the folders we sync with Syncthing.

This community has helped me set all of these up, and I appreciate what Jay and others have enabled my son and me to learn and do. Thank you!

3 Likes

I’m assuming you have backups. Just copy your CRM binaries and conf files, do a mysqldump to a file, and compress that file from time to time. When you are ready to move, shut down your CRM and DB and create two LXD containers. Copy the CRM files to one of them and the dump file to the other, where you also install mariadb-server. Configure the CRM to point to the new IP of the DB instance and you’re set. Start the DB, start the CRM, then continue doing your thing. If the laptop breaks down, you can quickly create the containers on another host; you know the workflow and you can recover from backups.
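
Something like this, as a minimal sketch; the container names, database name, and CRM path are hypothetical placeholders, and yours will differ:

# On the old host: dump the DB and archive the app files (names are placeholders)
mysqldump crm_db | gzip > crm_db.sql.gz
tar czf crm-files.tar.gz /opt/crm

# On the new host: create the two containers
lxc launch images:debian/11 crm-app
lxc launch images:debian/11 crm-db

# Push the files in, install mariadb in the DB container, restore the dump
lxc file push crm-files.tar.gz crm-app/root/
lxc file push crm_db.sql.gz crm-db/root/
lxc exec crm-db -- apt install -y mariadb-server
lxc exec crm-db -- sh -c "mysql -e 'CREATE DATABASE crm_db' && zcat /root/crm_db.sql.gz | mysql crm_db"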

Do not depend on MySQL still working when the laptop crashes; always have daily dumps (or 4 times a day if the DB is small) backed up to another computer on your network at the very least.
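
For example, a crontab along these lines (a sketch; credentials are assumed to be in /root/.my.cnf, and the paths and hostname are placeholders):

# Dump 4 times a day, then sync the dumps to another machine
0 2,8,14,20 * * *  mysqldump crm_db | gzip > /var/backups/crm_db-$(date +\%F_\%H).sql.gz
30 2,8,14,20 * * * rsync -a /var/backups/ backup-host:/srv/crm-backups/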

The other option would be to buy a second host and run LXD in HA mode, but that’s a bit complicated (I don’t know the process for that yet; it involves CRIU) and you would be investing in new hardware (or reusing hardware already on your network). Depending on the size of your company it may not be worth it; it may be better to wait until the laptop dies and then buy a NUC or something. You could then boot the OS on another host from your laptop, offline-migrate the containers, and maybe be set, or more likely have to do a DB restore from the last backup, and then continue on with business as usual.

2 Likes

Yes, great advice. I have backups on another machine at home, so they are off site as well. But the workflow that you mention with containers is a goal to learn towards. Thanks. The business is small and this CRM is not mission critical, just a tool that we are starting to use.

3 Likes

Not sure why that would surprise you. You should be able to host containers from any Linux distro. Many learned about containers from Docker. There are many graphical UIs that can be used to manage containers, for example Red Hat’s implementation with OpenShift, which has been around for quite a while. Portainer, which was already mentioned, is an excellent tool.

I remember the days before it was M$ or nothing.

@mowest a Raspberry Pi is a great way to learn containers. I currently have several Pi 4s running k8s. I built the cluster just to learn Kubernetes and I am still in the beginning stages, so I don’t have any containers spun up in it yet. But I have run Docker on other Pis to host individual open source projects that I was interested in. For example, I have run Nagios Enterprise and Zabbix (both monitoring applications) in containers on Pi 4s.

Good luck with your journey.

4 Likes

IDK about CRMs; I think it’s a web app + some middleware + a database layer? So maybe a lot like GitLab? I bet they have a Docker container for it, then, like GitLab does? We use GitLab’s Docker container on one of our Synology NASes and it works really well, plus it is super easy to configure, update, and back up. (Super easy as in even I can do it. :smile_cat: )

I think this is one of the nicest parts about having a separate “lab” environment (home lab or work lab): being able to try things out and test them before committing to them.

1 Like

Worth pointing out, for people who don’t know, that Linux containers (LXC, LXD) and OCI containers (Docker, Podman, k8s, k3s, k0s, microk8s, etc.) are not the same. They have similar roots, in that Docker was born from LXC, but it has since moved on from there.

LXC is basically the future of Linux VMs (with LXD being an orchestrator for it, making life easier), while OCI containers (Open Container Initiative) are a completely different beast. The goal of the first is to make Linux virtualization less resource intensive; it works by running any Linux distro on top of the host distro’s kernel. This has the advantage that updates to the guest container OS will likely not require a reboot, because most of it lives in user space rather than the kernel (the kernel is running on the host). However, program restarts are still needed when you update whatever you are running in LXC.
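
As a quick illustration (just a sketch; the image alias and container name are examples, check lxc image list images: for what’s current), running a whole CentOS user space on a Debian host is two commands:

# Launch a CentOS system container; it shares the Debian host's kernel
lxc launch images:centos/8-Stream my-centos
lxc exec my-centos -- bash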

OCI containers, on the other hand, are a way to run software by defining an end goal. You get a container template, say an nginx web server; you define in the compose file how many instances of it you want, what to do if servers become overloaded, auto-scaling, etc., and the container orchestrator does everything for you. If one host goes down and takes some containers with it, the orchestrator will launch new instances on other hosts automatically.
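
A minimal hypothetical sketch of that end-goal style (service name and ports are arbitrary; assumes Docker Compose v2):

# Describe the desired state in a compose file
cat > docker-compose.yml <<'EOF'
services:
  web:
    image: nginx:latest
    ports:
      - "8080-8082:80"   # one host port per replica
EOF

# Ask for three instances; the orchestrator makes it so
docker compose up -d --scale web=3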

Each container technology serves a purpose. LXC is the classic Linux way of managing servers and services; OCI containers are a new way of managing applications instead of OSes and software. They are not exactly comparable. You may compare them in a sense, because you can run nginx either as the classic rpm package in, say, a CentOS LXC container running on Debian, or as an nginx Docker container. nginx itself is the same software, just packaged differently.

I have seen cases where combining them gives you a big advantage. On big servers, you can potentially run thousands of containers. But Kubernetes has a limit of around 200 pods (actually 110 by default, though it can be increased to around 200-250) on one worker node. So I’ve seen cases where people ran a bunch of LXC containers, inside which they installed k8s and set them up as worker nodes. They got around the k8s limitation by adding more worker nodes.
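
For reference, that per-node limit is a kubelet setting, so raising it is something like this (a sketch assuming the usual kubeadm file layout):

# Add (or edit) maxPods in the kubelet config (default is 110)
echo "maxPods: 250" >> /var/lib/kubelet/config.yaml
systemctl restart kubelet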

And I keep telling people: if you want to just learn Kubernetes, don’t buy a bunch of single board computers. Buy just one, install LXD, create 4 to 12 LXC containers, install k3s or k8s or whatever you prefer inside the Linux containers, and then start learning. Buying many SBCs is a bit excessive if you aren’t going to actually use them.

If you’re going with k3s, make 1 master node and 3 worker nodes; if you’re going with k8s, do 3 masters and 6 to 9 workers. microk8s can be run as both master and worker, so you could use just 1, but that kind of defeats the purpose, so do the same as for k3s.
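
A rough sketch of the single-machine k3s version (names are arbitrary; k3s inside LXC typically needs a few more container settings beyond security.nesting, so treat this as a starting point):

# One master and three workers, all as LXC containers on one box
for node in k3s-master k3s-worker1 k3s-worker2 k3s-worker3; do
    lxc launch images:ubuntu/20.04 $node -c security.nesting=true
done

# Install k3s on the master, then read its join token
lxc exec k3s-master -- sh -c "curl -sfL https://get.k3s.io | sh -"
lxc exec k3s-master -- cat /var/lib/rancher/k3s/server/node-token

# Join each worker (master IP and token are placeholders)
lxc exec k3s-worker1 -- sh -c "curl -sfL https://get.k3s.io | K3S_URL=https://<master-ip>:6443 K3S_TOKEN=<token> sh -"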

If resources start getting limited on the single SBC, only then should you buy another one, install LXD on it, and move some of the containers to the new one. You don’t even need to recreate the containers; you just migrate them offline (or online with CRIU, but that’s another story).
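
The offline migration itself is short (a sketch; the remote name is a placeholder, and the new host needs its LXD API exposed first):

# On the old SBC: register the new host, stop the container, move it
lxc remote add new-sbc <new-sbc-ip>
lxc stop k3s-worker3
lxc move k3s-worker3 new-sbc: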

We also used GitLab, but we added the CentOS 7 repo and installed it as an rpm package. All we needed to do to update it was:

yum check-update
yum update -y
gitlab-ctl reconfigure

And we were off to the races. For backups, we used the GitLab backup utility daily, with the archive file automatically saved to an NFS server. I would say there are very few programs that require a lot of manual intervention to update, and that happens mostly when there is a major version change.
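
In practice it was just a daily cron entry along these lines (the NFS mount point is a placeholder; older GitLab versions use gitlab-rake gitlab:backup:create instead of gitlab-backup create):

# Nightly GitLab backup, copied to an NFS mount
0 3 * * * gitlab-backup create && cp /var/opt/gitlab/backups/*.tar /mnt/nfs-backups/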

2 Likes

Yeah, Synology has a nice scheduler that we set to run the GitLab backup. Then we have Hyper Backup jobs that back up the volumes of the working NASes to both our central local backup NAS and to Synology’s C2. All nice and locally encrypted.

1 Like

Have you tried doing a disaster recovery on GitLab? I suggest you do. I’m pretty sure just backing up the volumes that GitLab is running on will not be enough, because DBs are weird like that.
Never mind, I didn’t read that correctly; you are running the GitLab backup through the scheduler. That should be OK.

1 Like

One thing I’ve learned over the years is that backups are great, but a restoration process (DR) is a must-have; otherwise you jump through hoops trying to get things running properly again. If you can leverage ZFS snapshots, your life is “much” easier in a DR scenario.

2 Likes

Also, ZFS recently added native encryption, so you don’t have to rely on dm-crypt at the drive layer any more.
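
Enabling it is per-dataset, something like this (pool and dataset names are examples):

# Create an encrypted dataset; ZFS prompts for the passphrase on creation
zfs create -o encryption=on -o keyformat=passphrase tank/secure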

So:

# Simple backup process. ZFS exposes snapshots under the hidden
# .zfs/snapshot directory at the dataset root (here, assuming your
# home directory is its own dataset). On Linux, creating snapshots
# with mkdir requires the zfs_admin_snapshot module option; the
# portable equivalent is: zfs snapshot pool/home@some-name-$(date -I)

cd ~/.zfs/snapshot
mkdir some-name-$(date -I)

# That's it, you're done.

EDIT: Those snapshots can also be pushed off to object storage in some cloud facility. If your servers are already in the cloud, push them to whatever tenancy / compartment / Object Storage bucket you have set up. Just make sure you have your retention policies set to whatever threshold you need.
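
For example, with a hypothetical bucket and the AWS CLI (any S3-compatible object store works the same way):

# Stream a snapshot, compress it, and upload it straight to a bucket
zfs send tank/home@some-name-2022-01-01 | gzip | aws s3 cp - s3://my-backup-bucket/home-2022-01-01.zfs.gz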

1 Like

What surprised me is not that people in this thread were running containers, but that they were choosing to run LXD on distros other than Ubuntu. LXD is open source, but heavily developed by Canonical developers. In the same way, you don’t find much information about users running Podman (open source, but heavily developed by Red Hat developers) on Ubuntu or other distros outside of the RPM family, even though you can. I was simply surprised that both of the people who mentioned running LXD were not running it on Ubuntu-based distros, which of course you can, just as you can run Podman on non-RPM-based distros.

When I dive back into learning containers, I’m planning to learn LXD running on Debian 11, installed through snapd (another Canonical-developed technology). I’m sure it will be a great experience, and I’m excited to use it on my Debian 11 server.
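
For anyone else on Debian 11, the setup I have in mind is just the usual snap route (a sketch):

# Debian 11: snapd from apt, LXD from the snap store
sudo apt install snapd
sudo snap install lxd
sudo lxd init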

Really, as @KI7MT pointed out, having all of these “options” is wonderful. We have so many “choices” of ways to run and configure our servers using LXD, snapd, Podman, AppArmor, SELinux, and others. It is exciting.

Thanks @Mr_McBride for the idea of using containers on a Raspberry Pi. Unfortunately, I only have two RPi 1Bs to use, which are armv6, so I think I would be limited both by the hardware on those first generation Pis and by the architecture when trying to find containers that will run on them. However, I have lots and lots of recycled x86_64 hardware with at least 4 GB of RAM that I can spin up as servers, which would be plenty powerful for learning container tech.

CRM = Customer Relationship Management web app. So yes, the one that I’m using runs MariaDB with PHP code. It is not super popular, and the developers haven’t published a Docker image or even a docker-compose file. But this provides me with a learning opportunity to dig into LXD and create containers for the app that I can use to make regular backups and test restoring those backups.

Thanks for all of the server talk in this thread, I’m taking notes and plan to apply it to my home lab / work lab in future months.

2 Likes

On the LXD / Docker front: it’s good we’re having this conversation. Docker’s licensing is putting a lot of pressure on commercial entities. This is not a major issue for smaller companies, but for those with thousands of developers / users, it’s becoming cost prohibitive to use Docker for service deployments.

While I am certainly an advocate for open source development, most of us still need to work for some sort of commercial entity (full-time, part-time, or contract). If they are paying more for their infrastructure, that means fewer developers and less $$ in our offers. Just say’n.

1 Like

Why? Isn’t LXD open source? Why shouldn’t we be able to run it on our preferred (or company-supported) distro?

Is Canonical doing something to limit its use to their distro only?

1 Like

It’s probably the classic misconception that everything Canonical does is bad and their software ends up running only on Ubuntu, like what happened with Unity (2D, 3D, 7, 8), Mir, etc. Actually, it’s impressive that snapd runs on distros other than Ubuntu, but its dependency on systemd makes it unsuited to be a universal package manager.

LXD had a pretty rough start. Most big providers went with OCI containers as opposed to LXC (OpenShift and Podman for RHEL, Rancher and k3s on SUSE). And most distros that did support LXC only support it via the classic LXC tool set, not the LXD orchestrator. Proxmox uses its own toolkit on top of LXC, while most other distros use the normal lxc-* commands. OpenNebula had support for LXD at some point, but it was removed in favor of LXC. The only platform that supports LXD is OpenStack.

But LXD itself is FOSS and it is available on other distros, despite being somewhat unpopular. Alpine, Gentoo, and Void have it in their main or community repos, while Fedora, RHEL, and I believe Debian have it available either via snapd (obviously) or via 3rd party repos (copr or a deb repo). Not sure what its status is on SUSE, but it’s likely not available there by default either.

Still, I find LXD a better tool for controlling LXC. Both LXC and LXD are mostly developed by Canonical, but thankfully the technology isn’t dependent on anything Ubuntu-specific, as all the necessary features that enable it are built into the mainline kernel.

1 Like

My posts must have been unclear and confusing; I’m sorry about that. When I have searched for information about LXD in the past, everything I found talked about LXD being run on top of Ubuntu. I was simply trying to express my excitement at seeing members of this community using LXD on other distros, and I was excited to hear that other distros have made that easy by including it in their repos.

Canonical has been a great open source citizen in releasing LXD as open source and continuing to support the development of this great tool.

I spend most of my time in Fedora and Debian, and I haven’t seen LXD mentioned in the Fedora and Debian communities as I search for tutorials and documentation. You are correct, though: there is nothing stopping someone from running LXD on either of those distros, and I’m planning to do just that. I didn’t mean to imply that there was anything wrong with running LXD on other distros. Just the opposite: I’m thankful that this open source tool is available on so many distro platforms.

I meant to be positive and give thanks for this, but I apologize if my posts came across with a different tone.

2 Likes

To your point, Ubuntu is very popular.

Aside from what’s already been mentioned above by @Biky: compliance, audits, security, licensing, etc. macOS and Bash is a classic example: Apple shipped an ancient Bash for years rather than accept GPLv3, and eventually switched the default shell to zsh.