To store more virtual machines I need more space, so I am planning to get a 1TB NVMe drive and an NVMe USB adapter for migration purposes. My storage is named VMContainers for both VMs and LXC containers. I should mention that I am well-versed in Linux (CompTIA A+, Network+, Security+, CySA+ (passed this Tuesday), and Cisco CCNA), so I consider myself an advanced Linux user. Plus, I'm used to creating and resizing LVM partitions, so I have that covered.
If I am correct, I cannot rename the storage from VMContainer to VMContainer-tmp, so will assigning new storage on the 1TB NVMe over USB with the same name be a problem? I want to create LVM storage on the 1TB NVMe, not LVM-Thin, as I cannot create snapshots using LVM-Thin; I did not know that until a few months ago.
To put it simply, my process is as follows:
1. Insert the 1TB NVMe drive into a USB-C NVMe enclosure.
2. Connect the enclosure to the server using a USB-C cable.
3. Provision the 1TB NVMe drive as LVM storage in Proxmox.
4. Turn off all VMs and containers.
5. Migrate the VMs and containers over to the 1TB NVMe drive in the enclosure.
6. Shut down the server and unplug the PSU.
7. Remove the 500GB NVMe drive from the motherboard's M.2 slot.
8. Remove the 1TB drive from the enclosure and install it in the motherboard's M.2 slot.
9. Plug in the PSU and turn the server back on.
10. From a live USB image such as Ubuntu or Pop!_OS, mount the root drive and modify /etc/fstab so that Linux can read the new LVM partition.
11. Make sure that the VMs and containers are in place. If all goes well, the drive replacement is a success.
Now, I have never tried migrating an LVM partition from one drive to another, and in my case the 500GB NVMe drive uses LVM-Thin storage, not plain LVM. Am I overcomplicating this storage replacement? Can it really be done?
I suppose vm-100 and all the rest are logical volumes, so you could just rename the volume group, but that shouldn't matter. Make sure none of the LVs are mounted or in use. Then you can create a new VG on the 1TB SSD.
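If you go the rename route, a minimal sketch, assuming the existing VG really is named VMContainer (as elsewhere in this thread) and that nothing in it is in use:

```shell
vgchange -an VMContainer             # deactivate every LV in the VG first
vgrename VMContainer VMContainer-tmp # rename the volume group
vgchange -ay VMContainer-tmp         # reactivate the LVs under the new name
```

After that, the new 1TB drive can carry a fresh VG under the original name.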
If I were you, I'd buy a 2TB NVMe drive. I just bought 2x 2TB SATA SSDs last weekend for my Odroid HC4 as a temporary stop-gap, to back up files from a ZFS pool to a temporary BTRFS pool, which I'll destroy once I get ZFS working on aarch64, but that's another story.
Speaking of which, is there any reason why you are not using BTRFS or ZFS?
In any case, I'm not sure Proxmox allows you to rename the VG, so you may have to transfer the LVs manually to the other drive. There are ways to do it with LVM tools alone, with things like vgsplit, vgmerge, and pvmove, but I have no idea how those work.
Easiest thing to do is to create a new physical volume on the new NVMe drive, create the new VG (reusing the name VMContainer), create new logical volumes with the same names and sizes as the originals, then basically dd if=/dev/VMContainer-tmp/vm-100-disk-0 of=/dev/VMContainer/vm-100-disk-0 for each one. A pain in the butt, so you should write a script to do it for each VM disk.
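That per-disk loop could be scripted roughly like this. The VG names (VMContainer-tmp for the old storage, VMContainer for the new) and the LV list are assumptions based on this thread; DRY_RUN defaults to printing the commands instead of running dd, so you can review them before touching real volumes:

```shell
#!/bin/sh
# Sketch: copy each VM disk LV from the old VG to same-named LVs in the new VG.
# The destination LVs must already exist and be at least as large as the source.
SRC_VG=VMContainer-tmp
DST_VG=VMContainer
DRY_RUN=${DRY_RUN:-1}   # default to a dry run; set DRY_RUN=0 to actually copy

for lv in vm-100-disk-0 vm-101-disk-0; do
  cmd="dd if=/dev/$SRC_VG/$lv of=/dev/$DST_VG/$lv bs=4M conv=fsync status=progress"
  if [ "$DRY_RUN" = "1" ]; then
    echo "$cmd"         # dry run: show what would be copied
  else
    $cmd                # real copy
  fi
done
```

Run it once as-is to see the commands, then again with DRY_RUN=0 when they look right.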
IMO, it would be way easier to make a ZFS pool named VMContainer and just migrate from LVM to ZFS via the Proxmox GUI. WAY easier.
2TB is $100 extra and does not improve the cost per gigabyte the way going from 500GB to 1TB does, but it would give me more room for virtual homelab setups. Proxmox does not seem to allow dynamic allocation of space for VMs and Linux containers, unlike the bare LXC and KVM with virt-manager that I used before Proxmox. But since I have a Mac Mini and a Linux desktop, I think it is worth migrating to Proxmox so I can access the virtual machine console from the web interface instead of using virt-manager.
I've replaced a drive on my Ubuntu server by adding the new drive as a new PV using the LVM tools. I moved the extents from the original physical volume to the new drive. Then, after removing the old PV from the VG, I could safely remove the old drive. I basically followed the steps laid out here.
It worked flawlessly on Ubuntu Server, but that was full LVM; I expect thin LVM would work the same. As long as you have a good backup, you really don't have a lot to lose. Since you are going to be bringing the server down to swap the drive anyway, a simple VM restore onto a fresh drive would be annoying, but easy to do, if it doesn't work out.
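The extent-move approach described above boils down to something like the following sketch; the VG name and the device paths are assumptions for this setup, not verified against it:

```shell
pvcreate /dev/sdc                  # initialize the new drive as a PV
vgextend VMContainer /dev/sdc      # add it to the existing VG
pvmove /dev/nvme0n1 /dev/sdc       # move all allocated extents off the old PV
vgreduce VMContainer /dev/nvme0n1  # remove the old PV from the VG
pvremove /dev/nvme0n1              # wipe the LVM label; old drive can come out
```

pvmove is restartable if interrupted, but it is slow, so plan for it to run a while on a mostly full drive.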
I'm going from LVM-Thin on a 500GB drive to LVM on a new 1TB (or 2TB) drive with a BTRFS file system (or maybe ZFS, which Proxmox lists as a storage type). Thank you for providing a link for replacing the drive.
I'm thinking that "VMContainer-vm--100--disk--0" and "VMContainer-vm--101--disk--0" inside /dev/mapper are each their own volume group, since 100 and 101 each have their own disk for a virtual machine. I'm going to place an order for a 2TB NVMe drive. I don't think 100, 101, 102, etc. are logical volumes, because they are part of the LVM-Thin pool. And that's why I am trying to migrate to a file system that gives me the ability to create snapshots of a virtual machine.
Update: I placed an order for an SK Hynix 2TB NVMe drive along with a USB-C NVMe enclosure. Lots and lots of drive space to play with for spinning up more virtual machines and Linux containers. Maybe one day I could learn how to set up a k8s (Kubernetes) cluster and become CompTIA Linux+ certified!
(I am recently CompTIA CySA+ certified as of August 23, 2022 and I have renewed three of my CompTIA certifications: A+, Network+, and Security+.)
Congrats on the certifications! I forgot to mention that when I saw your CySA+ was recent.
So, question: how have you set up your nvme0n1 to have each VM set up as a separate VG? I was never given an option like that. I'm using Proxmox 7.2, if that makes any difference. Either way, you should still be able to move each LV to the new drive; it just means you'll have to repeat the process. I expect it should be a fairly smooth process. (I apologize, because saying that, or "this should take 5 minutes," is usually the beginning of a disaster. I hope that's just me though.)
Why would you want to use LVM snapshots? Proxmox's own snapshot system is pretty good, at least in my limited experience. I have almost all of my virtual disks on an ext4 partition and the snapshots have worked like a charm. No LVM required.
Proxmox just uses whatever snapshot engine is behind the storage. If LVM/Thin, then LVM snapshots. If ZFS, ZFS snapshots. If BTRFS, that. If qcow2, then that. If you use NFS and don't use qcow2 for the vdisks (i.e. raw images, or plain directories for containers), then you are limited to taking snapshots on your NAS. Well, if you run LVM, ZFS, or BTRFS on your NAS, you can totally take the snapshots there.
Anyway, I think I will use gdisk to create a partition and format it as ext4 for the storage of containers and virtual machines. I would rather configure LVM myself than have Proxmox create logical volumes per virtual machine and container.
And no, I’m not using NFS. I only have a single server with two drives (one for OS and ISO files, one for VMs and containers). So yeah, I feel like I’m new to Ceph and ZFS and all that stuff.
This is what I get when I went to a snapshot section for either a virtual machine or Linux container:
The current guest configuration does not support taking new snapshots
And this is what's causing my confusion. Proxmox does have a lot of tools, such as firewalls and networking, including a banner telling me that I need to subscribe in order to receive stable updates. But I was used to command-line tools such as pvcreate, vgcreate, lvs, vgs, and so many others, and Proxmox gives me the feeling that it abstracts a lot of that away behind the web interface. I was used to virt-manager and plain LXC commands such as lxc-create, lxc-ls, and lxc-attach before I switched to Proxmox, and I was used to netplan.io as well.
However, Proxmox does have a nice interface for network configuration; it just restricts me to a bridge naming convention such as vmbr0, instead of something like ethbr0 for an Ethernet interface (OPNsense uses that interface for WAN). I like to follow my own convention so I can identify which bridge is assigned to which interface; if no bridge is assigned to a physical interface, then I use vmbr0 as the convention for virtual machine networks. That is something I miss from before I switched to Proxmox.
Thanks for this reply. I suppose that makes sense. All of my VMs are using qcow2 virtual disks, so I just assumed the snapshot system was a Proxmox-specific tool. I haven't really done much digging into the KVM tools.
Let me know how this works out. I had to initialize my second disk with the web GUI in order for my Proxmox install to recognize and use it; I didn't dig into whether there is a way to use it with a manually created partition. I am also a little confused. Maybe I am wrong, but don't you need an LVM partition to use LVM on it? I know you can create an ext4 filesystem on an LV, but can you do it the other way around? If you do go with ext4, you'll have to convert your raw LVM volumes to qcow2. As Biky explained above, that's the file/snapshot system I have been using, and it has worked really well for me.
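For the raw-to-qcow2 conversion itself, qemu-img can do it directly. In this sketch the source LV path and the destination directory are hypothetical, chosen to match the naming used in this thread:

```shell
# Convert a raw LV image into a qcow2 file on a directory storage.
# -p shows progress; -f/-O are the input and output formats.
qemu-img convert -p -f raw -O qcow2 \
    /dev/VMContainer/vm-100-disk-0 \
    /mnt/vmstore/images/100/vm-100-disk-0.qcow2

# Sanity-check the result before pointing the VM at it:
qemu-img info /mnt/vmstore/images/100/vm-100-disk-0.qcow2
```

The VM must be shut down during the conversion, and its config then needs to reference the new qcow2 file instead of the LV.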
Keep us posted. I’d love to hear how your upgrade goes.
Seems like LVM does not require a partition in order to work.
gpadmin-local@pve1:~$ sudo gdisk -l /dev/nvme0n1
GPT fdisk (gdisk) version 1.0.6
Warning: Partition table header claims that the size of partition table
entries is 0 bytes, but this program supports only 128-byte entries.
Adjusting accordingly, but partition table may be garbage.
Partition table scan:
MBR: not present
BSD: not present
APM: not present
GPT: not present
Creating new GPT entries in memory.
Disk /dev/nvme0n1: 1000215216 sectors, 476.9 GiB
Model: SPCC M.2 PCIe SSD
Sector size (logical/physical): 512/512 bytes
Disk identifier (GUID): F782E2A9-31A6-49BE-A9D6-6F76DABE9C6A
Partition table holds up to 128 entries
Main partition table begins at sector 2 and ends at sector 33
First usable sector is 34, last usable sector is 1000215182
Partitions will be aligned on 2048-sector boundaries
Total free space is 1000215149 sectors (476.9 GiB)
Number Start (sector) End (sector) Size Code Name
So yeah, that’s the same drive that I’m about to replace.
Yup, that's the way that I understand it. The whole drive can be added as a PV, or it can be partitioned with LVM partitions to add part of the drive as a PV, or as several PVs. The file system is then built on the LVM.
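Both options can be sketched like this, with hypothetical device names:

```shell
# Option 1: use the whole disk as a single PV, no partition table at all.
pvcreate /dev/nvme1n1

# Option 2: carve out a partition first and use just that as the PV.
sgdisk -n 1:0:+500G -t 1:8e00 /dev/nvme1n1   # type 8e00 = Linux LVM
pvcreate /dev/nvme1n1p1
vgcreate VMContainer /dev/nvme1n1p1
```

The whole-disk form is why gdisk reports no partition table on a drive that LVM is happily using.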
I've never seen Proxmox do that without creating a Linux LVM partition. Interesting. I'll just need to figure out how to convert logical volumes into qcow2 format on my Proxmox server when I get the drive.
When you get the drive, all you have to do is format it as whatever you want, add it as storage in Proxmox, go to your VM, go to its attached disks, and hit Disk Action, then Move Storage. If you go with a directory or local NFS (which you can use if you want to test VMs on a different server without moving their disks), you will get the option to convert the disk to qcow2 by default, but you can select raw too. I would say qcow2 if you want snapshots; raw gives you more performance, but you need additional finagling for snapshots, unless you take snapshots inside the VM, which you can.
If you use ZFS or BTRFS, I don't think that option will be given to you, just like with local LVM, because the disk will just be created as a volume on your volume manager (one of those three; reminder that the former two are not just file systems).
To add the disk once it arrives, you can format it however you want (you said you wanted ext4), then click on your host on the left side, go to Disks, and pick Create Directory. Reminder that when you format your partition yourself, you need to add it to fstab so that it gets mounted automatically.
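A hedged example of that fstab entry for an ext4 directory storage; the UUID and mount point are placeholders, and the UUID for a real partition comes from blkid:

```
# <file system>                             <mount point>  <type>  <options>        <dump> <pass>
UUID=0f1e2d3c-4b5a-6978-8796-a5b4c3d2e1f0   /mnt/vmstore   ext4    defaults,nofail  0      2
```

The nofail option keeps the host booting even if the drive is missing, which is handy while you are still shuffling disks around.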
In the same menu, you can format your disk as ZFS, LVM-Thin, or LVM without having to touch the command line. Interestingly enough, Ceph is there, but no BTRFS. Not that it matters: if you can use ZFS, it is the preferable choice, and BTRFS is still the poor man's ZFS. Which reminds me, I haven't managed to get ZFS running on any aarch64 distro. Fedora is my last hope, if it has a proper makefile, because I have no idea how to build zfs-dkms.
My 2TB drive was delivered today, and I have created a new storage called "VMContainers" with ZFS. I wasn't sure whether compression uses more memory or not, so I left it on. I moved all of my disks from the 500GB drive to the 2TB drive, so I am good to go. However, I still have "raw" instead of "qcow2" for the virtual machines.
As "VMContainers" (note the "s") is on /dev/sdc (my NVMe drive in the enclosure), once I replace my 500GB drive with the 2TB drive, how do I get Proxmox to recognize the new drive once it shows up as /dev/nvme0n1 (second M.2 slot of my motherboard)? I do not see "VMContainers" or /dev/sdc in the fstab file.
(A few minutes later, after testing the snapshot feature…) I have taken a snapshot of the OPNsense VM and it works as intended, so raw is fine for me. I should have gone with a directory approach instead, but I'm fine with ZFS.
ZFS does not get mounted via fstab; it has its own kernel module and handles mounting itself. Once you format /dev/sdc and copy the images over, you should be able to use zpool export to remove the pool; then, once you power off your host and swap the SSDs, you can zpool import the pool.
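A sketch of that pool move, using the pool name from this thread (ZFS identifies member disks by on-disk metadata, not by /dev path, so the slot change does not matter):

```shell
zpool export VMContainers   # cleanly detach the pool before shutdown
# ...power off, move the SSD from the enclosure to the M.2 slot, boot...
zpool import VMContainers   # re-import; ZFS finds the disk at its new path
zpool status VMContainers   # verify the pool shows ONLINE
```

If the import cannot find the pool by name, a bare zpool import lists every importable pool it can see.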
You are not doing something right then. You should not even see a file format when you copy the disk images.
Bad news! My ASRock B450 Gaming K4 motherboard is not able to detect my SK Hynix 2TB drive.
I'm going to have to live with what I have for now. I wanted to go with a Gen4 NVMe instead of Gen3, and I should have stuck with Gen3. However, if I upgrade to a B550 or X570 motherboard, that should solve my NVMe detection problem. My motherboard did detect a 500GB NVMe (a Silicon Power 512GB M.2), which worked with the motherboard I wanted to replace. I tried reseating the NVMe drive, but nope: not detected in the UEFI/BIOS.
In the meantime, I'll just live with what I have. 10Gbit/s over USB is not as fast as a real PCIe Gen3 NVMe, but it will suffice for now.
IMO, Gen4 doesn't make much sense at this point. The NAND flash is still the same; you only get the benefit of the extra bandwidth until the SSD's RAM cache fills up, and then you get throttled to the NAND speed. Even if I had a PCIe 4.0 motherboard, I would still have picked up my 2x Intel 670p QLC 1TB SSDs. And I have 2 more Crucial MX500 2TB SATA SSDs, and this is for my home setup. I used to have 4x Samsung 860 Pro 1TB SSDs in RAID 10 in an HP ProLiant MicroServer Gen8 NAS running about 30 VMs (on other hosts; this box was just NFS storage).
If I could get Intel Optane for PCIe Gen4, or any 3D XPoint, then I would, but even then the flash storage is not fast enough to reach Gen4 speeds; it just has low latency.
With M.2 drives, I had luck with the aforementioned Intel 670p, the Samsung 970 Pro, and the ADATA XPG SX8200 Pro. All very good drives.