Proxmox: Replacing a 500GB Internal NVMe Drive with a 1TB NVMe Drive

I have storage set up in my server as follows:

gpadmin-local@pve1:~$ lsblk
NAME                           MAJ:MIN RM   SIZE RO TYPE MOUNTPOINT
sda                              8:0    0   1.8T  0 disk 
└─sda1                           8:1    0   1.8T  0 part 
sdb                              8:16   0   3.6T  0 disk 
└─sdb1                           8:17   0   3.6T  0 part 
nvme1n1                        259:0    0 111.8G  0 disk 
├─nvme1n1p1                    259:1    0  1007K  0 part 
├─nvme1n1p2                    259:2    0   512M  0 part /boot/efi
└─nvme1n1p3                    259:3    0 111.3G  0 part 
  ├─pve-swap                   253:7    0     8G  0 lvm  [SWAP]
  ├─pve-root                   253:8    0  27.8G  0 lvm  /
  └─pve-iso                    253:9    0  75.5G  0 lvm  /mnt/iso
nvme0n1                        259:4    0 476.9G  0 disk 
├─VMContainer-vm--100--disk--0 253:0    0    20G  0 lvm  
├─VMContainer-vm--101--disk--0 253:1    0    40G  0 lvm  
├─VMContainer-vm--300--disk--0 253:2    0    20G  0 lvm  
├─VMContainer-vm--102--disk--0 253:3    0    32G  0 lvm  
├─VMContainer-vm--350--disk--0 253:4    0     8G  0 lvm  
├─VMContainer-vm--103--disk--0 253:5    0    32G  0 lvm  
├─VMContainer-vm--103--disk--1 253:6    0    32G  0 lvm  
└─VMContainer-vm--102--disk--1 253:10   0    40G  0 lvm

In order to store more virtual machines, I decided I need more space, so I am planning on getting a 1TB NVMe drive and an NVMe adapter for migration purposes. My storage name is VMContainer for both VMs and LXC containers. I should mention that I am well-versed in Linux (CompTIA A+, Network+, Security+, CySA+ (passed this Tuesday), and Cisco CCNA), so I consider myself an advanced Linux user. Plus, I'm used to creating and resizing LVM partitions, so I have that covered.

If I am correct, I cannot rename the storage from VMContainer to VMContainer-tmp, so will assigning a new storage with the same name to the 1TB NVMe over USB be a problem? I want to create LVM storage on the 1TB NVMe and not LVM Thin, as I cannot create snapshots using LVM Thin, something I only learned a few months ago.

To put it simply, my process is as follows:

  1. Insert the 1TB NVMe drive into a USB-C NVMe enclosure.
  2. Connect the 1TB NVMe drive to the server using a USB-C cable.
  3. Provision the 1TB NVMe drive as LVM storage in Proxmox (see the sketch just after this list).
  4. Shut down all VMs and containers.
  5. Migrate the VMs and containers over to the 1TB NVMe drive in the enclosure.
  6. Shut down the server and unplug it from the PSU.
  7. Remove the 500GB NVMe drive from the motherboard's M.2 slot.
  8. Take the 1TB drive out of the enclosure and install it in the motherboard's M.2 slot.
  9. Plug in the PSU and turn the server back on.
  10. From a live USB image such as Ubuntu or Pop!_OS, mount the root drive and modify the /etc/fstab file so that Linux can read the new LVM partition.
  11. Make sure the VMs and containers are in place. If all goes well, the drive replacement is a success.
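
For step 3, a minimal sketch of the provisioning from the shell, assuming the USB-attached drive shows up as /dev/sdc and using a different storage ID so it does not clash with the existing VMContainer (both names are assumptions):

# Confirm which device node the USB enclosure got
lsblk
# Create a physical volume and a volume group on the new drive
pvcreate /dev/sdc
vgcreate VMContainer-new /dev/sdc
# Register the VG as (thick) LVM storage in Proxmox
pvesm add lvm VMContainer-new --vgname VMContainer-new --content images,rootdir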

Now, I have never tried migrating an LVM setup from one drive to another, and in my case my 500GB NVMe drive has LVM Thin storage, not plain LVM. Am I overcomplicating this drive replacement? Can this really be done?

You said LVM?

vgrename VMContainer VMContainer-tmp

I suppose --100 and all the rest are logical volumes, so you can just rename the volume group, but that shouldn't matter. Make sure none of the LVs are mounted or in use. Then you can create a new VG on the 1TB SSD.
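
Something like this, as a rough sketch (all guests shut down first; the device name of the new SSD is an assumption):

# An "o" in the sixth lv_attr character means a volume is still open somewhere
lvs -o lv_name,lv_attr VMContainer
# Deactivate the old VG, then rename it out of the way
vgchange -an VMContainer
vgrename VMContainer VMContainer-tmp
# Create the replacement VG on the new SSD (here assumed to be /dev/sdc)
pvcreate /dev/sdc
vgcreate VMContainer /dev/sdc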

If I were you, I'd buy a 2TB NVMe drive. Well, just last weekend I bought 2x 2TB SATA SSDs for my Odroid HC4 as a temporary stop-gap, to do some file backups from a ZFS pool to a temporary BTRFS pool, which I'll destroy once I get ZFS to work on aarch64, but that's another story.

Speaking of which, is there any reason why you are not using BTRFS or ZFS?

In any case, I'm not sure Proxmox allows you to rename the VG, so you would have to transfer the LVs manually to the other drive. There are ways to do it with LVM tools alone, with things like vgsplit, vgmerge, and pvmove, but I have no idea how those work.

The easiest thing to do is to create a new physical volume on the new NVMe drive, create the VG VMContainer, create new logical volumes with the same names and sizes as the old ones, then basically dd if=/dev/VMContainer-tmp/vm-100-disk-0 of=/dev/VMContainer/vm-100-disk-0. A pain in the butt, so you should write a script to do it for each VM disk.
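
A rough sketch of such a script, assuming the old VG has already been renamed to VMContainer-tmp, matching empty LVs exist in the new VMContainer VG, and every guest is shut down:

#!/bin/bash
# Copy each logical volume from the renamed old VG into the new VG
for lv in $(lvs --noheadings -o lv_name VMContainer-tmp); do
    echo "Copying ${lv}..."
    dd if="/dev/VMContainer-tmp/${lv}" of="/dev/VMContainer/${lv}" bs=4M status=progress
done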

IMO, it would be way easier to make a ZFS pool named VMContainer and just migrate from LVM to ZFS via the Proxmox GUI. WAY easier.
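
If you go the ZFS route, the CLI side is roughly this (a sketch; /dev/sdc is a placeholder, and the Proxmox storage ID has to differ from the existing LVM storage while both exist):

# Create a single-disk pool on the new drive
zpool create VMContainer /dev/sdc
# Register it as ZFS storage in Proxmox
pvesm add zfspool VMContainer-zfs --pool VMContainer --content images,rootdir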


A 2TB drive is $100 extra and doesn't improve the cost per gigabyte the way going from 500GB to 1TB does, but it would give me more room for virtual homelab setups. Proxmox does not seem to allow dynamic allocation of space for VMs and Linux containers, unlike the bare-bones LXC and KVM with virt-manager that I used before Proxmox. But since I have a Mac Mini and a Linux desktop, I think it is good to migrate to Proxmox so I can access virtual machine consoles from the web interface instead of using virt-manager.

Anyway, thank you and I appreciate your help.

I've replaced a drive on my Ubuntu server by adding the new drive as a new PV using the LVM tools. I moved the extents from the original physical volume to the new drive. Then, after removing the old PV from the VG, I could safely remove the old drive. I basically followed the steps laid out here.

It worked flawlessly on Ubuntu Server, but that was full LVM. I expect thin LVM would work the same. As long as you have a good backup, you really don't have a lot to lose. Since you are going to bring the server down to swap the drive anyway, a simple VM restore onto the fresh drive would be annoying but easy to do if it doesn't work out.
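
For reference, on the Proxmox box that approach would look roughly like this, assuming the new drive shows up as /dev/sdc while it's in the enclosure (device names are assumptions):

# Add the new drive to the existing volume group
pvcreate /dev/sdc
vgextend VMContainer /dev/sdc
# Move all extents off the old 500GB NVMe onto the new drive (this can take a while)
pvmove /dev/nvme0n1
# Drop the old drive from the VG once the move completes
vgreduce VMContainer /dev/nvme0n1
pvremove /dev/nvme0n1

The nice part is that the VG name never changes, so the Proxmox storage configuration doesn't need touching.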


I'm going from LVM Thin on a 500GB drive to LVM on a new 1TB (or 2TB) drive with a BTRFS file system (or maybe ZFS, which is listed in Proxmox as a storage type). Thank you for providing a link for replacing the drive.

I'm thinking that "VMContainer-vm--100--disk--0" and "VMContainer-vm--101--disk--0" inside /dev/mapper are each their own volume group, as 100 and 101 each have their own disk for a virtual machine. I'm going to place an order for a 2TB NVMe drive. I don't think 100, 101, 102, etc. are logical volumes, because they are part of the LVM Thin pool. And that's why I am trying to migrate to a file system that gives me the ability to create snapshots of a virtual machine.

Update: I placed an order for an SK Hynix 2TB NVMe drive along with a USB-C NVMe enclosure. Lots and lots of drive space to play with for spinning up more virtual machines and Linux containers. Maybe one day I could learn how to set up a k8s (Kubernetes) cluster and become CompTIA Linux+ certified!

(I recently became CompTIA CySA+ certified, as of August 23, 2022, and I have renewed three of my CompTIA certifications: A+, Network+, and Security+.)


Congrats on the certifications! I forgot to mention that when I saw your CySA+ was recent.

So, a question: how have you set up your nvme0n1 to have each VM set up as a separate VG? I was never given an option like that. I'm using Proxmox 7.2, if that makes any difference. Either way, you should still be able to move each LV to the new drive; it just means you'll have to repeat the process. I expect it should be a fairly smooth process. (I apologize, because saying that, or "this should take 5 minutes," is usually the beginning of a disaster. I hope that's just me, though.)

Why would you want to use LVM snapshots? The snapshot system of Proxmox itself is pretty good, at least in my limited experience. I have almost all of my virtual disks on an ext4 partition and the snapshots have worked like a charm. No LVMs required.

Proxmox just uses whatever snapshot engine is behind the storage. If LVM/Thin, then LVM snapshots. If ZFS, ZFS snapshots. If BTRFS, that. If qcow2, then that. If you use NFS and don't use qcow2 for the vdisks (i.e., you just use raw, or for containers just directories), then you are limited to taking snapshots on your NAS. Well, if you run LVM, ZFS, or BTRFS on your NAS, you can totally take the snapshots there.

Thank you.

Actually, I should have used the vgs and lvs commands; I looked in Proxmox and it is LVM, not LVM Thin. I'm still trying to get a handle on Proxmox's storage. My apologies for the confusion.

root@pve1:~# vgs
  VG          #PV #LV #SN Attr   VSize    VFree   
  VMContainer   1   9   0 wz--n- <476.94g <204.94g
  pve           1   3   0 wz--n- <111.29g       0 
root@pve1:~# lvs
  LV            VG          Attr       LSize   Pool Origin Data%  Meta%  Move Log Cpy%Sync Convert
  vm-100-disk-0 VMContainer -wi-ao----  20.00g                                                    
  vm-101-disk-0 VMContainer -wi-ao----  40.00g                                                    
  vm-102-disk-0 VMContainer -wi-ao----  32.00g                                                    
  vm-102-disk-1 VMContainer -wi-ao----  40.00g                                                    
  vm-103-disk-0 VMContainer -wi-ao----  32.00g                                                    
  vm-103-disk-1 VMContainer -wi-ao----  32.00g                                                    
  vm-104-disk-0 VMContainer -wi-------  48.00g                                                    
  vm-300-disk-0 VMContainer -wi-ao----  20.00g                                                    
  vm-350-disk-0 VMContainer -wi-ao----   8.00g                                                    
  iso           pve         -wi-ao---- <75.54g                                                    
  root          pve         -wi-ao----  27.75g                                                    
  swap          pve         -wi-ao----   8.00g

Anyway, I think I will use gdisk to create a partition and format it as ext4 for the storage of containers and virtual machines. I would rather configure LVM myself instead of having Proxmox create logical volumes per virtual machine and container.

And no, I’m not using NFS. I only have a single server with two drives (one for OS and ISO files, one for VMs and containers). So yeah, I feel like I’m new to Ceph and ZFS and all that stuff.

This is what I get when I go to the snapshot section for either a virtual machine or a Linux container:

The current guest configuration does not support taking new snapshots

And this is what's causing my confusion. Proxmox does have a lot of tools, such as firewalls and networking, including a feature for telling me that I need to subscribe in order to receive stable updates. But I was so used to the command line (pvcreate, vgcreate, lvs, vgs, and so many other commands) that Proxmox gives me the feeling it abstracts a lot of that away behind the web interface. I was used to virt-manager and plain LXC commands such as lxc-create, lxc-fs, and lxc-attach before I switched to Proxmox. I was used to netplan.io as well.

Proxmox does have a nice interface for network configuration; however, it restricts me to a naming convention for bridges, such as vmbr0, instead of something like ethbr0 for an Ethernet interface (OPNsense uses that interface for WAN). I like to follow my own convention so I can identify which bridge is assigned to which interface. If no bridge is assigned to any physical interface, then I use vmbr0 as the convention for spinning up virtual machine networks. That is something I miss from before I switched to Proxmox.

Thanks for this reply. I suppose that makes sense. All of my VMs are using qcow2 virtual disks, so I just assumed the snapshot system was a Proxmox-specific tool. I haven't really done much digging into the KVM tools.

Let me know how this works out. I had to initialize my second disk with the web GUI in order for my Proxmox install to recognize and use it. I didn't dig to see if there was a way to use it with a manually created partition. I am also a little confused. Maybe I am wrong, but don't you need an LVM partition to use LVM on it? I know you can create an ext4 filesystem on an LV, but can you do it the other way around? If you do go with ext4, you'll have to convert your raw LVs to qcow2. As Biky explained above, that's the file/snapshot system I have been using, and it has worked really well for me.

Keep us posted. I’d love to hear how your upgrade goes.

Seems like LVM does not require a partition in order to work.

gpadmin-local@pve1:~$ sudo gdisk -l /dev/nvme0n1
GPT fdisk (gdisk) version 1.0.6

Warning: Partition table header claims that the size of partition table
entries is 0 bytes, but this program  supports only 128-byte entries.
Adjusting accordingly, but partition table may be garbage.
Partition table scan:
  MBR: not present
  BSD: not present
  APM: not present
  GPT: not present

Creating new GPT entries in memory.
Disk /dev/nvme0n1: 1000215216 sectors, 476.9 GiB
Model: SPCC M.2 PCIe SSD                       
Sector size (logical/physical): 512/512 bytes
Disk identifier (GUID): F782E2A9-31A6-49BE-A9D6-6F76DABE9C6A
Partition table holds up to 128 entries
Main partition table begins at sector 2 and ends at sector 33
First usable sector is 34, last usable sector is 1000215182
Partitions will be aligned on 2048-sector boundaries
Total free space is 1000215149 sectors (476.9 GiB)

Number  Start (sector)    End (sector)  Size       Code  Name
gpadmin-local@pve1:~$ 

So yeah, that’s the same drive that I’m about to replace.

Yup. That's the way I understand it. The whole drive can be added as a PV, or it can be partitioned with LVM partitions to add part of the drive as one PV or several PVs. The file system is then built on top of the LVM.
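
A minimal sketch of both options (/dev/sdX and MyVG are placeholders):

# Option 1: use the whole disk as a PV, no partition table at all
pvcreate /dev/sdX
# Option 2: partition it first (gdisk type code 8e00, "Linux LVM"), then use the partition
pvcreate /dev/sdX1
# Either way, the VG is then created on top of the PV(s)
vgcreate MyVG /dev/sdX1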


I've never seen that before, Proxmox not creating a Linux LVM partition. Interesting. I'll just need to figure out how to convert the logical volumes into qcow2 format on my Proxmox server when I get the drive.
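
In case it helps, the conversion itself can be done with qemu-img; a sketch, assuming a directory storage mounted at /mnt/vmstore (the path is an assumption) and the VM shut down:

# Convert the raw LV into a qcow2 file on a directory storage
qemu-img convert -p -f raw -O qcow2 \
    /dev/VMContainer/vm-100-disk-0 \
    /mnt/vmstore/images/100/vm-100-disk-0.qcow2

That said, the Move Storage action described in the reply below does the conversion and updates the VM config for you.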

I’ll keep you posted when I get the new drive.

When you get the drive, all you have to do is format it as whatever you want, import it into Proxmox (or both), go to your VM, go to its attached disks, and hit Disk Action → Move Storage. If you go with a directory or local NFS (which you can do if you want to test VMs on a different server without moving their disks), you will get the option to convert the disk to qcow2 by default, but you can select raw too. I would say qcow2 if you want snapshots; raw gives you more performance, but you need additional finagling for snapshots, unless you do snapshots inside the VM, which you can.
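
The same move works from the CLI too; a sketch, assuming VM 100 has its disk on scsi0, container 101 has a rootfs to move, and the target storage is called VMContainers (all of those IDs are assumptions):

# Move a VM disk to the new storage, converting to qcow2 (format only applies to file-based storage)
qm move_disk 100 scsi0 VMContainers --format qcow2
# Containers have an equivalent command
pct move_volume 101 rootfs VMContainers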

If you use ZFS or BTRFS, I don't think that option will be given to you, just like if you choose local LVM, because the disk will just be created as a volume on your volume manager (one of those three; reminder that the former two are not just file systems).

To add the disk once it arrives, you can format it however you want (you said you wanted ext4), then click on your host on the left side, go to Disks, and pick Create: Directory. Reminder that when you format the partition yourself, you need to add it to fstab so that it gets mounted automatically.
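
If you go the manual route instead, a sketch of that flow, assuming the new disk ends up as /dev/nvme0n1 with a single partition and you want it mounted at /mnt/vmstore (device, mount point, and storage ID are assumptions):

# Format the partition and mount it permanently
mkfs.ext4 /dev/nvme0n1p1
mkdir -p /mnt/vmstore
echo "UUID=$(blkid -s UUID -o value /dev/nvme0n1p1) /mnt/vmstore ext4 defaults 0 2" >> /etc/fstab
mount /mnt/vmstore
# Register the mount point as a directory storage in Proxmox
pvesm add dir vmstore --path /mnt/vmstore --content images,rootdir,iso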

In the same menu, you can format your disk as ZFS, LVM-Thin, or LVM without having to touch the command line. Interestingly enough, there's Ceph there, but no BTRFS. Not that it matters; if you can use ZFS, it is the preferable choice. BTRFS is still the poor man's ZFS. Which reminds me, I haven't managed to get ZFS on any aarch64 distro. Fedora is my last hope, if it has a proper makefile, because I have no idea how to build zfs-dkms.


My 2TB drive was delivered today and I have created a new storage called "VMContainers" with ZFS. I wasn't sure whether compression uses more memory or not, so I left it on. I moved all of my disks from the 500GB drive to the 2TB drive, so I am good to go. However, I still have "raw" instead of "qcow2" for the virtual machines.

As "VMContainers" (note the "s") is on /dev/sdc (my NVMe drive in the USB enclosure), once I replace my 500GB drive with the 2TB drive, how do I get Proxmox to recognize the new drive once it shows up as /dev/nvme0n1 (second M.2 slot of my motherboard)? I do not see "VMContainers" or /dev/sdc in the fstab file.

(A few minutes later, after testing the snapshot feature…) I have taken a snapshot of the OPNsense VM and it's working as intended, so raw is fine for me. I should have gone with a folder approach instead, but I'm fine with ZFS.

ZFS does not get mounted via fstab; it's its own kernel module and handles mounting itself. Once you format /dev/sdc and copy the images over, you should be able to zpool export the pool, and then, once you power off your host and swap the SSDs, you can zpool import it again.
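
The export/import is just two commands; a sketch, assuming the pool is named VMContainers and all guests are stopped:

# Before shutting down to swap the drives
zpool export VMContainers
# After the drive is installed in the M.2 slot and the host is back up
zpool import VMContainers
# If in doubt, "zpool import" with no arguments lists the pools it can find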

You are not doing something right, then. You should not even see a file format when you copy the disk images over.

biky-tr# zfs list
NAME                                                                                        USED  AVAIL     REFER  MOUNTPOINT
zroot                                                                                       804G   119G       96K  none
zroot/ROOT                                                                                 25.6G   119G       96K  none
zroot/ROOT/void                                                                            25.6G   119G     25.6G  /
zroot/home                                                                                 5.56G   119G     5.56G  /home
zroot/kvm                                                                                   772G   119G       96K  none
zroot/kvm/itarchive                                                                        31.9G   119G     31.9G  -
zroot/kvm/nixos-22.05                                                                      15.5G   134G       56K  -
zroot/kvm/obsd-d0                                                                          8.87G   125G     2.69G  -
zroot/kvm/obsd-d1                                                                          2.06G   121G       60K  -
zroot/kvm/prometheus                                                                       17.6G   119G     17.5G  -
zroot/kvm/win10                                                                            66.0G   165G     20.0G  -
zroot/kvm/win10-data                                                                        206G   302G     23.3G  -
zroot/kvm/win8-recov                                                                        424G   119G      390G  -
zroot/lxd                                                                                  7.53M   119G       96K  legacy
zroot/lxd/containers                                                                        412K   119G       96K  legacy
zroot/lxd/containers/alp                                                                    316K   119G     6.30M  legacy
zroot/lxd/custom                                                                             96K   119G       96K  legacy
zroot/lxd/deleted                                                                          6.75M   119G       96K  legacy
zroot/lxd/deleted/containers                                                                 96K   119G       96K  legacy
zroot/lxd/deleted/custom                                                                     96K   119G       96K  legacy
zroot/lxd/deleted/images                                                                   6.38M   119G       96K  legacy
zroot/lxd/deleted/images/64cdedcf7f375c35831086cef6bdda145354051f26a642b7ece75c885fcb4c5e  6.29M   119G     6.28M  legacy
zroot/lxd/deleted/virtual-machines                                                           96K   119G       96K  legacy
zroot/lxd/images                                                                             96K   119G       96K  legacy
zroot/lxd/virtual-machines                                                                   96K   119G       96K  legacy

When you move a disk image, you should see "raw" greyed out, because you should not be allowed to change the format. That only becomes available to pick when you have a directory or an NFS share.


That's how it should look.

Bad news! My ASRock B450 Gaming K4 motherboard is not able to detect my SK Hynix 2TB drive.

I'm going to have to live with what I have for now. I wanted to go with a Gen4 NVMe instead of a Gen3 NVMe, and I should have stuck with Gen3. However, if I upgrade to a B550 or X570 motherboard, that should solve my NVMe detection problem. My motherboard did detect a 500GB NVMe, a Silicon Power 512GB NVMe M.2, and that one worked in the motherboard I wanted to replace. I tried reseating the new NVMe drive, but nope, it's not detected in the UEFI/BIOS.

In the meantime, I'll just live with what I have. 10 Gbit/s over USB is not as fast as a real PCIe Gen3 NVMe slot, but it will suffice for now.

Thank you everyone.

IMO, Gen4 doesn't make much sense at this point. The NAND flash is still the same; you only get the benefit of the bandwidth until the SSD's RAM cache fills up, then you get throttled to the NAND speed. Even if I had a PCIe 4.0 motherboard, I would still have picked up my 2x Intel 670p QLC 1TB SSDs. And I have 2 more Crucial MX500 2TB SATA SSDs. And this is for my home setup. I used to have 4x Samsung 860 Pro 1TB SSDs in RAID 10 in an HP ProLiant MicroServer Gen8 NAS serving about 30 VMs (running on other hosts; this was just NFS storage).

If I could get Intel Optane for PCIe Gen4, or any 3D XPoint, then I would, but even then, the flash storage is not fast enough to reach Gen4 specs, just lower latency.

With M.2 drives, I had luck with the aforementioned Intel 670p, the Samsung 970 Pro, and the ADATA XPG SX8200 Pro. All very good drives.

tl;dr just get gen3, it’s fine.


As I mentioned, I already have a 2TB NVMe drive and I'm happy with what I have, even if I'm only using it through a USB-C enclosure.

Anyway, thanks for your help.


Please note! If you want to go with ZFS and are using LXC containers, you might run into problems when trying to get MySQL server installed.

2022-09-20T09:47:26.204923Z 0 [System] [MY-013169] [Server] /usr/sbin/mysqld (mysqld 8.0.30-0ubuntu0.20.04.2) initializing of server in progress as process 8975
2022-09-20T09:47:26.213005Z 1 [System] [MY-013576] [InnoDB] InnoDB initialization has started.
2022-09-20T09:47:26.743515Z 1 [System] [MY-013577] [InnoDB] InnoDB initialization has ended.
2022-09-20T09:47:28.116696Z 6 [Warning] [MY-010453] [Server] root@localhost is created with an empty password ! Please consider switching off the --initialize-insecure option.
2022-09-20T09:47:29.621929Z 6 [System] [MY-013172] [Server] Received SHUTDOWN from user boot. Shutting down mysqld (Version: 8.0.30-0ubuntu0.20.04.2).
2022-09-20T09:47:33.781922Z 0 [System] [MY-010116] [Server] /usr/sbin/mysqld (mysqld 8.0.30-0ubuntu0.20.04.2) starting as process 9027
2022-09-20T09:47:33.798621Z 1 [System] [MY-013576] [InnoDB] InnoDB initialization has started.
2022-09-20T09:47:33.948643Z 1 [ERROR] [MY-012962] [InnoDB] The redo log file ./#innodb_redo/#ib_redo5 size 2531328 is not a multiple of innodb_page_size
2022-09-20T09:47:33.948820Z 1 [ERROR] [MY-012930] [InnoDB] Plugin initialization aborted with error Generic error.
2022-09-20T09:47:34.313897Z 1 [ERROR] [MY-010334] [Server] Failed to initialize DD Storage Engine
2022-09-20T09:47:34.314275Z 0 [ERROR] [MY-010020] [Server] Data Dictionary initialization failed.
2022-09-20T09:47:34.314302Z 0 [ERROR] [MY-010119] [Server] Aborting
2022-09-20T09:47:34.314988Z 0 [System] [MY-010910] [Server] /usr/sbin/mysqld: Shutdown complete (mysqld 8.0.30-0ubuntu0.20.04.2)  (Ubuntu).
2022-09-20T09:47:35.283427Z 0 [System] [MY-010116] [Server] /usr/sbin/mysqld (mysqld 8.0.30-0ubuntu0.20.04.2) starting as process 9128
2022-09-20T09:47:35.296797Z 1 [System] [MY-013576] [InnoDB] InnoDB initialization has started.
2022-09-20T09:47:35.430839Z 1 [ERROR] [MY-012962] [InnoDB] The redo log file ./#innodb_redo/#ib_redo5 size 2531328 is not a multiple of innodb_page_size
2022-09-20T09:47:35.430889Z 1 [ERROR] [MY-012930] [InnoDB] Plugin initialization aborted with error Generic error.
2022-09-20T09:47:35.811903Z 1 [ERROR] [MY-010334] [Server] Failed to initialize DD Storage Engine
2022-09-20T09:47:35.812080Z 0 [ERROR] [MY-010020] [Server] Data Dictionary initialization failed.
2022-09-20T09:47:35.812111Z 0 [ERROR] [MY-010119] [Server] Aborting

I stumbled into this thread:

Note to self: better to go with BTRFS or a directory in the future. Don't use ZFS for LXC containers and VMs.

That's weird, I have a Fedora LXC container running MySQL on ZFS. Care to pass me the commands you used? I'm not a DB guy.

