Proxmox with non-ECC memory. ZFS? BTRFS? EXT4?

I am going to be setting up another Proxmox node on a device that doesn’t have the option of using ECC memory (HP Elitedesk 805 G6 with a Ryzen processor). I have a few questions:

  1. Without ECC memory, am I better off going with ZFS or some other file system for the installation (ext4? BTRFS?), and why?

  2. If I install with a different file system, what Proxmox-specific features and functionality do I give up/lose in a single node environment? This will not be in a cluster, and I will be running some plain vanilla VMs (Nextcloud, Wordpress, Docker, etc.) and storing data/doing VM backups to a separate Synology NAS.

Thanks in advance

The only advantage ECC offers ZFS is the same one it offers any other FS: your RAM gets corrected for errors, so in theory you won’t be writing junk data to your pool. ZFS, if you have a redundant pool, will scrub the pool for errors on the fly, eliminating the need to run fsck at boot. I’ve been running ZFS almost 24/7 on an ARM SBC for about three quarters of a year now, with no issues so far.
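A manual scrub is just this (assuming your pool is named rpool, the Proxmox installer default; substitute your own):

zpool scrub rpool        # walks every block and verifies checksums, repairing from the mirror copy where it can
zpool status -v rpool    # shows scrub progress and lists any files with unrecoverable errors

Debian/Proxmox also ships a periodic scrub job (monthly, if I remember right), so you rarely need to run it by hand.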

With Proxmox, you normally go with either ZFS or LVM(-thin). Both take advantage of thin provisioning by default and let you take snapshots. Qcow2 on a normal FS like ext4 also allows snapshots, and it’s what I used to use on a cluster via NFS; I don’t think there’s much of a performance difference between them.
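As a rough sketch, the relevant storage definitions in /etc/pve/storage.cfg look something like this (local-zfs and local-lvm are just the installer’s default names):

zfspool: local-zfs
        pool rpool/data
        sparse 1
        content images,rootdir

lvmthin: local-lvm
        thinpool data
        vgname pve
        content images,rootdir

Either way, a snapshot from the CLI is just "qm snapshot 100 pre-upgrade" (100 being whatever VMID you’re using).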

I’d still say go with ZFS because of the scrubbing and the data integrity checks built into the FS. If you don’t have two drives and are just going to use one disk for the pool, go with LVM, because ZFS is only really worthwhile in a redundant configuration (mirror / striped mirror, RAID-Z1 / Z2 / Z3). A by-hand example of the mirror case is below.
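For reference, a two-disk mirror built by hand is just this (example device names, and it wipes both disks; the installer does the equivalent for the boot pool if you pick ZFS RAID1):

zpool create -o ashift=12 tank mirror /dev/sdb /dev/sdc   # "tank" is an example pool name
zpool status tank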

I run Proxmox on another server with ECC memory and I use ZFS in a mirror array (two arrays actually: one for the boot drive and a separate two-disk NVMe mirror for VM storage).

The new machine will have a mirror as well, just no ECC RAM. I am wondering if I might be better off using BTRFS or EXT4 on that machine instead of ZFS. I take nightly backups of everything and store two copies locally and one on Amazon Glacier.

If you already use ZFS, then you’d be losing an important feature if you ever want to migrate or clone VMs: zfs send. I’d suggest sticking with ZFS. I would highly suggest not using btrfs unless your OS actually makes use of its features (like openSUSE does with its transactional-server setup, using btrfs snapshots to roll back updates). For Proxmox, ZFS is probably the best choice.
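A minimal sketch of what that looks like (dataset and host names are just examples, assuming Proxmox’s default rpool/data layout):

zfs snapshot rpool/data/vm-100-disk-0@migrate
zfs send rpool/data/vm-100-disk-0@migrate | ssh other-node zfs receive tank/vm-100-disk-0

That’s the mechanism behind cheap replication and VM moves; with qcow2 on ext4 you’re back to copying whole image files around.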

I kind of like BTRFS. I am running an OMV instance built on a mdadm raid 1 array, formatted with BTRFS. Since I run a single node and not a cluster, I wonder if I would really lose any functionality using BTRFS instead.
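For context, the setup is roughly this (example device names, and obviously don’t run it against disks that have data on them):

mdadm --create /dev/md0 --level=1 --raid-devices=2 /dev/sdd /dev/sde
mkfs.btrfs /dev/md0    # btrfs sees a single device; md handles the mirroring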

Why are you using md raid, when btrfs has its own raid / volume manager?

mkfs.btrfs -d raid1 /dev/sdd /dev/sde

Note that the above will format the disks and remove all the data on them; it’s just an example of how easy it is to set up. No need for mdraid. You’re probably not even getting some of the btrfs advantages by using md, namely scrubbing and data-validation checksums, which really only pay off when btrfs itself manages the redundancy. Because btrfs sees only one device (single mode) on top of md, it can detect corruption via checksums but has no second copy of its own to repair from. This is similar to running ZFS on top of a RAID controller (which is highly discouraged; I’m not sure how bad btrfs on md is in comparison).
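With btrfs managing both disks itself, the scrub can actually repair from the good copy; checking it looks roughly like this (/mnt/pool is an example mount point):

btrfs scrub start /mnt/pool     # reads everything and rewrites bad copies from the other mirror
btrfs scrub status /mnt/pool
btrfs device stats /mnt/pool    # per-device error counters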

Unlike a ZFS scrub, the BTRFS scrub is not as advanced: it’s not a replacement for fsck, it just verifies data checksums. A ZFS scrub checks not only the data checksums but also the integrity of the file system itself. ZFS is way more advanced than btrfs.
https://btrfs.readthedocs.io/en/latest/Scrub.html
https://openzfs.github.io/openzfs-docs/man/master/8/zpool-scrub.8.html

Mostly as an experiment. I was trying to duplicate the Synology system.