My experiences with switching to Linux

First time poster. I joined the forums about a year ago, and just recently got around to actually browsing some of the topics here. Lots of good information and knowledgeable people on these forums. I’ve also been watching and learning from Jay’s videos.
I’ve been a computer enthusiast since my first intro to computers in the mid '70s (kinda puts a date on me, lol). I downloaded my first Linux distro in the late '90s. I don’t remember which distro it was, but I had no idea what to do with it, and even fewer resources to learn from.
Several years ago, after a couple of major data losses, I started researching backup solutions. I discovered homelabs and servers, and fell way, way down that rabbit hole. So, today I’m the proud owner of several high-powered servers, a couple of amazing Raspberry Pi 4s, and more Linux VMs than I can shake a stick at.
Getting to the topic at hand: part of my strategy for trying out various distros included removing the Windows drive from my laptop and swapping in an inexpensive SSD (240 GB, $35 CAD / $27 US), installing whatever Linux flavor I wanted, then swapping back to the Windows SSD if I needed to.
Another thing I’ve done is install VirtualBox on my laptop; I currently have Ubuntu Desktop 22.04, Debian 11, Kali Linux, Linux Mint, and Tails VMs installed. On the servers, I have a dedicated Supermicro pfSense box, a Supermicro TrueNAS server, a Dell 510 Proxmox machine, a 48-thread Dell 720xd running TrueNAS Scale, and an Intel server with 32 threads running VMware ESXi (this one was given to me).
In the interest of power savings, I’ve recently switched most of my services over to the Pis as my “always on” servers. Following Novaspirittech’s pi-hosted series on YT, I ended up with 25 Docker containers running on one 8GB Pi 4 and a dozen on the other, including a complete ARR stack, Jellyfin, Pi-hole (of course), apt-cacher-ng, and WireGuard, along with a separate MotionEye install for security cameras. I recently wrote a bash script to back up and prune my security camera videos and rsync them nightly to the other Pi via a cron job. Thanks to Jay’s videos, it works like a charm.
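In case it helps anyone, here’s a minimal sketch of that kind of script. The paths, hostname, and 14-day retention are placeholders, not my exact setup:

```bash
#!/usr/bin/env bash
# Prune old security-camera clips, then mirror what's left to the second Pi.
# SRC, DEST, and RETENTION_DAYS are placeholders; adjust for your own layout.
set -euo pipefail

SRC="/mnt/cameras/motioneye"           # where MotionEye writes recordings
DEST="backup-pi:/mnt/backup/cameras"   # rsync-over-SSH target on the other Pi
RETENTION_DAYS=14

# Delete clips older than the retention window, then clean up empty directories
find "$SRC" -type f -name '*.mp4' -mtime +"$RETENTION_DAYS" -delete
find "$SRC" -type d -empty -delete

# Mirror the remaining clips to the other Pi (--delete keeps both sides pruned)
rsync -a --delete "$SRC"/ "$DEST"/
```

The nightly run is just a crontab entry along the lines of `0 2 * * * /home/pi/bin/camera-backup.sh`.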
I have a couple issues with the Pis, but I’ll start a topic in the Raspberry Pi section for that.


Sounds like you’ve got an incredible homelab. Welcome to the forum. Now I’ve got to look up what apt-cacher-ng is.

I would have to agree with your comment “fell way, way down that rabbit hole”. But in a good way!
I too ditched Windows this past January, after decades of running it (since version 1.0). I have four Linux Mint workstations and two servers, and I’m currently building a third server with a Supermicro X10 motherboard. My Linux journey has been like drinking from a fire hose; it has been total immersion.
The good news is that I have never looked back and have no plans to ever go back to Windows. Linux is feature-rich and has a ton of knowledgeable people, like Jay, willing to share their time and experience.

Curious as to what Supermicro hardware you landed on.

Both Supermicro servers came with X10SLL-F motherboards. The 2U 8-bay TrueNAS machine had an E3-1220v3 (4c/4t) CPU and the 1U pfSense box had an E3-1230v3 (4c/8t) CPU; I swapped the CPUs between the two machines. I just ordered 32 GB of RAM for the TrueNAS server.

Apt-cacher-ng works with any distro that uses the apt package manager. You set it up on one install of Ubuntu, Debian, or any other apt-based distro, and create an apt config file on the other installs. When any of the installs are updated, they first check the caching machine to see if the packages they need are already cached; if they are, they get the packages locally, and if not, they fall back to the regular repositories. This works especially well for me, as I have crappy internet service as well as a cap on my high-speed data plan.
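For anyone curious, the client side is just a one-line apt proxy setting pointing at the caching box. The hostname and file name below are examples; 3142 is apt-cacher-ng’s default port:

```bash
# On each apt-based client, point apt at the cache (run as root).
# "cache-host" is a placeholder for the machine running apt-cacher-ng.
echo 'Acquire::http::Proxy "http://cache-host:3142";' > /etc/apt/apt.conf.d/01proxy

# From then on, apt update / apt upgrade pull through the cache first
apt update
```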

Sometimes I have as many as two dozen VMs and other installs updating through apt, and this saves me a ton of downloads, as well as a lot of time, because the installs get the packages locally.

Here’s the YT video I followed to set it up initially. Apt Package Caching using apt-cacher-ng on a Raspberry Pi - YouTube

There is also a docker image for apt-cacher-ng here: Docker

Thanks for the additional information OldGoat! Very helpful!

I wonder what people who have so many computers (real or virtual) do with them. I have one laptop which I use daily, and I can do everything I want with it. I don’t need a complete computer farm to keep the room at temperature; with me in the room it is warm enough.

Because having a disposable or non-production system to play with is fun! It also allows you to test new software or configurations without fear of breaking your main setup. For example, you might test how BTRFS works on a Debian system without actually running it on your main machine. Perhaps you have your main machine configured for gaming, but then have another one as a media server.

I have a low-power NAS which serves all my data on my network, a backup server (which hasn’t gone live yet, I just need HDDs), a container ARM SBC (which is idle as well), my main PC (which is just an x86 SBC with SO-DIMM RAM), my router (which is also an SBC), and a switch. I also have a Threadripper hypervisor, but that thing isn’t on more than 3 days a week, for less than 5 hours each day.

If you get into low-power stuff, you won’t worry about heat (I live in a small room where I built a loft bed because I needed to expand vertically). I’m planning another low-power build to go alongside the TR (powered on from time to time), planning to transform my main PC into another low-power hypervisor, and will just use an old x86 tablet as my daily driver instead.

Except for the Threadripper, I plan to power everything off a 12 V system (since I’m planning a solar build); staying at 12 V DC is just much more efficient than converting to AC and back. Everything is ready for the move whenever I make it. My main PC is currently powered by a USB-C PD 20 V laptop brick through a 5.5x2.1 mm barrel-jack adapter; my NAS is similar, except I use two step-down buck converters to take 19 V down to 5 V and 12 V. The other SBCs already run from their bricks at 12 V (15 V for the backup server), and all of them can be powered by USB PD with a barrel-jack adapter for their respective voltages. I already plan to order more cigarette-lighter USB-C PD plugs.

I don’t get how people can afford to run ancient, power-hungry servers. I had two low-power builds at one point and they were barely sipping power; then I got an old Xeon X3450 build for free and my power bill more than doubled with that thing. I’m glad I’m not running it anymore.

As to what people run on multiple devices… it depends. For me, it’s mostly my NAS’s NFS server serving data to multiple devices on my network, plus some VMs here and there to test stuff. Like hypoiodous said, I wouldn’t have tried NixOS on my main PC directly without first trying it in a VM. Once I got it set up properly, I slapped it on 3 SBCs (including my backup server that’s yet to run any backups). And I’m planning a lot more useful in-house services. The most important thing is my VPN, which I use to access stuff on my network over the internet from my phone (like my photo collection) when needed, without depending on “the cloud.”
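For what it’s worth, the remote-access piece can be as simple as a road-warrior config on the phone or laptop. This is only an illustrative sketch assuming WireGuard (which OldGoat mentioned running above); all keys, subnets, and the endpoint are placeholders:

```bash
# Write an example wg-quick client config; the same file can be imported
# into the WireGuard phone app. Every value below is a placeholder.
sudo tee /etc/wireguard/wg0.conf >/dev/null <<'EOF'
[Interface]
PrivateKey = <client-private-key>
Address = 10.8.0.2/32

[Peer]
PublicKey = <server-public-key>
Endpoint = home.example.net:51820
AllowedIPs = 10.8.0.0/24, 192.168.1.0/24   # VPN subnet + home LAN
PersistentKeepalive = 25
EOF

# Bring the tunnel up on a Linux client
sudo wg-quick up wg0
```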

I do that simply with my laptop: first I create a Timeshift backup of the system, and then a backup of my data through rsync to my NAS.
Nothing bad can happen, and I can play as much as I like.
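Roughly, the two steps look something like this (the home path and NAS target are placeholders for my actual setup):

```bash
# Snapshot the system with Timeshift before experimenting
sudo timeshift --create --comments "before experimenting"

# Mirror my home data to the NAS over SSH (placeholder paths)
rsync -a --delete /home/me/ nas:/volume1/backups/laptop/
```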
But I get your point.