Accidental HomeLab - July 30, 2021 - Putting it back together

Another exciting week here in the HomeLab. Thanks to everyone for all your help and suggestions.

My new pfSense router arrived. I started bringing my various machines and services back up via Ansible. Thanks to @KI7MT and @Mr_McBride, I figured out how to save specific files between rebuilds. In this iteration, I am assigning everything a static IP address in pfSense. I don’t know if it is a best practice, and it wouldn’t seem to scale well, but for now it gives me the fewest surprises. My IP list is my central point of truth for the network.
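For anyone curious, the file-saving step looks roughly like this. It’s a minimal sketch; the ‘homelab’ group name and the file paths are placeholders for my setup, not anything @KI7MT or @Mr_McBride specifically prescribed:

```
# Pull a few config files off each host before a rebuild, using
# Ansible's fetch module in ad-hoc form. fetch saves files into
# per-host subdirectories under ./saved/ automatically.
ansible homelab -m fetch -a "src=/etc/hosts dest=saved/"
ansible homelab -m fetch -a "src=/etc/network/interfaces dest=saved/"

# After the rebuild, push them back with the copy module.
ansible homelab -m copy -a "src=saved/{{ inventory_hostname }}/etc/hosts dest=/etc/hosts"
```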

Something interesting is that pfSense shows almost no consistency in my machine hostnames. I set the router domain to ‘acme.’ The 192.168.10.0/24 subnet is ‘lan.acme,’ and the second subnet is ‘lab.acme.’ Each machine’s hostname was assigned during its OS installation, and pfSense reports everything from ‘no host name’ to ‘pi-hole’ to ‘syno.localhost’ to ‘proxmox.lan.acme.’ I’ll have to sort that out somehow. I find it weird that it doesn’t just work out of the box. That is a puzzle for another day.
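When I do get to it, I suspect the per-machine fix on the Linux boxes is something like this (a sketch; ‘proxmox.lab.acme’ is just the name I would want pfSense to see):

```
# Set the fully-qualified hostname so DHCP/DNS registration is consistent.
hostnamectl set-hostname proxmox.lab.acme

# Verify what the machine will report to the network.
hostname -f

# And make sure /etc/hosts agrees, e.g.:
# <static-ip>  proxmox.lab.acme  proxmox
```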

My current project is to bring up proxmox.lab.acme, my test Proxmox server, entirely from Ansible and the command line. It seems like a good way to understand each step along the way toward automating my Proxmox management, which is the reason I started this whole project: I wanted to be able to bring up test virtual machines and single-board computers. The Proxmox VE Admin Guide for 7.x is a great resource. I am still using the GUI a lot to check whether what I thought would happen actually did happen!
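As an example of where I’m headed, here is the kind of thing the command line makes easy (a sketch; VM ID 9000 and the storage name local-lvm are the defaults from a stock install, not my final setup):

```
# Create a small test VM from the Proxmox shell.
qm create 9000 --name test-vm --memory 2048 --cores 2 \
  --net0 virtio,bridge=vmbr0

# Allocate a 32G disk on the default LVM-thin storage.
qm set 9000 --scsi0 local-lvm:32

# Boot it, then confirm in the GUI that it really happened.
qm start 9000
qm status 9000
```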

Manually configuring Proxmox led me down the path that is the Logical Volume Manager (LVM). I have been using LVM for years without ever having the slightest idea how it worked.
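For anyone else who has been cargo-culting LVM: the layering finally clicked for me. Physical volumes are pooled into a volume group, and logical volumes are carved out of that pool. A sketch, using a hypothetical spare disk /dev/sdb:

```
# 1. Mark the disk as an LVM physical volume (PV).
pvcreate /dev/sdb

# 2. Pool it into a volume group (VG).
vgcreate vg_lab /dev/sdb

# 3. Carve a logical volume (LV) out of the pool.
lvcreate -n lv_vms -L 100G vg_lab

# Inspect each layer.
pvs; vgs; lvs
```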

I am curious whether anyone thinks this dive into LVM is worth it, or whether I should install Proxmox on ZFS and invest my time in becoming familiar with ZFS instead. My test machine is an old laptop with 8 GB of memory and a 512 GB SSD, so there is no technical reason to use ZFS. But I did a short test, and it seemed to work.
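The ‘short test’ was nothing fancy. After installing Proxmox on ZFS I just poked at the pool the installer created (rpool is the installer’s default pool name, if I remember right):

```
# Check pool health and layout.
zpool status rpool

# See the datasets the installer created and their space usage.
zfs list

# Run a scrub to verify the disk checks out end to end.
zpool scrub rpool
```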

The adage that ZFS ‘needs’ a lot of memory seems to be a bit off the mark. ZFS ‘likes’ a lot of memory for its read cache (the ARC), but it gives that memory back to the host when a new virtual machine asks for it. This might cause the gray-beards to break out in hives, but it seems to work for testing.
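You can watch this happen, and if the gray-beards insist, the ARC can be capped outright (a sketch; the 4 GiB figure is an arbitrary choice for an 8 GB machine):

```
# Current ARC size and target, in bytes.
grep -E '^(size|c) ' /proc/spl/kstat/zfs/arcstats

# Cap the ARC at 4 GiB on the running system...
echo 4294967296 > /sys/module/zfs/parameters/zfs_arc_max

# ...and make it permanent across reboots.
echo "options zfs zfs_arc_max=4294967296" >> /etc/modprobe.d/zfs.conf
update-initramfs -u
```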


LVM vs BTRFS vs ZFS is almost a religious discussion/argument.

I’m using LVM myself on metal, but my use case is being able to easily add HDDs and expand my /opt mount. Now I’m thinking about switching my backup methodology to start using snapshots, so that it can join the 21st century. That means I will probably choose to learn BTRFS or ZFS.
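For reference, the add-a-disk-and-grow dance is only a few commands (a sketch; vg_opt and lv_opt are stand-ins for whatever your VG and LV are actually called):

```
# Add a new HDD to the volume group backing /opt.
pvcreate /dev/sdc
vgextend vg_opt /dev/sdc

# Grow the LV into the new free space and resize the filesystem in one shot.
lvextend --resizefs -l +100%FREE /dev/vg_opt/lv_opt

# LVM can do snapshots too, they just need reserved space up front:
lvcreate -s -n opt_snap -L 20G /dev/vg_opt/lv_opt
```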

TL;DR: BTRFS and ZFS, from what little I know about both of them, already have some or all of the features of LVM.
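For comparison, snapshots are one-liners in both (assuming a hypothetical ZFS dataset tank/opt and a BTRFS subvolume mounted at /opt):

```
# ZFS: instant, space-efficient snapshot of a dataset.
zfs snapshot tank/opt@backup-2021-07-30

# BTRFS: read-only snapshot of a subvolume.
btrfs subvolume snapshot -r /opt /opt/.snapshots/backup-2021-07-30
```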