Help me understand LVM and KVM

This is my first post! It seems there are legends within the Linux community. The question may not be particularly appealing to everyone, but perhaps it is to some of you.

I’d like to clear up my confusion about the differences between KVM and LVM. What does it mean when a VPS or dedicated server provider mentions KVM? Also, where does LVM come into play, especially regarding its role in enhancing disk storage?

Is there something here that relates to Docker?

Welcome to the forum!

KVM, or Kernel-based Virtual Machine, is a hypervisor built into the Linux kernel. Unlike type 2 hypervisors such as Oracle VirtualBox that run in userland, KVM is a type 1 hypervisor that runs straight from the kernel, giving you more performance (technically kinda type 1.5, but there’s not really such a thing).
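
If you want to sanity-check whether a box actually has KVM available (or whether the VPS you got really is KVM-backed), a quick sketch with standard tools looks something like this:

```bash
# Does the CPU advertise hardware virtualization? (vmx = Intel, svm = AMD)
grep -Ec '(vmx|svm)' /proc/cpuinfo

# Is the kvm module loaded and the device node exposed?
lsmod | grep kvm
ls -l /dev/kvm

# From inside a VM: what hypervisor am I running on?
systemd-detect-virt   # prints "kvm" on KVM-based hosts
```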

LVM, or Logical Volume Manager, is a technology on Unix and Unix-like systems, including Linux, used to manage logical volumes. Back in the days of DOS (MBR) partition tables, where you could only have 4 primary partitions on a disk, LVM would let you logically split one of those physical partitions into… well… logical partitions. DOS had extended partitions, but those were clunky. Those days are gone; GPT partition tables support more and larger partitions.

LVM is still widely used today for other reasons, particularly the ability to expand volumes live (yes, with the system running) when your OS is virtualized (or even on bare metal when you run a LUN to a SAN). LVM also lets you take snapshots and allocate block storage either thick-provisioned or thin-provisioned.
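
Just to make the thick vs thin part concrete, here’s a rough sketch on the command line (the disk /dev/vdb and the names vg_data, lv_data, tpool are made up for illustration):

```bash
# Turn a second disk into an LVM physical volume and a volume group
pvcreate /dev/vdb
vgcreate vg_data /dev/vdb

# Thick (classic) LV: the full 100G is reserved in the VG right away
lvcreate -L 100G -n lv_data vg_data

# Thin provisioning: a pool, then LVs that only consume space as data is written
lvcreate -L 200G --type thin-pool -n tpool vg_data
lvcreate -V 500G --thin -n lv_thin vg_data/tpool   # virtual size can exceed the pool
```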

So let’s take that example of a VPS provider. They might be running a KVM-based hypervisor, like Proxmox or CloudStack, and give your VM some storage by using LVM (and presenting that to the VM as block storage - think of block storage kinda like a physical storage device, like an SSD). Then, in the VM, you are running Linux and you install it on top of the “virtual disk” by partitioning it with GPT and using LVM on top to logically partition the system (like /, /home, /var etc.).
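
Inside the guest, that layering might look roughly like this (assuming the virtual disk shows up as /dev/vda and picking vg_os as a VG name, purely illustrative; a real install would also want an EFI partition and so on):

```bash
# GPT partition table: a small boot partition, the rest handed to LVM
parted -s /dev/vda mklabel gpt
parted -s /dev/vda mkpart boot ext4 1MiB 1GiB
parted -s /dev/vda mkpart lvm 1GiB 100%
parted -s /dev/vda set 2 lvm on

# LVM on the big partition, carved into logical volumes
pvcreate /dev/vda2
vgcreate vg_os /dev/vda2
lvcreate -L 20G -n root vg_os
lvcreate -L 10G -n home vg_os
lvcreate -L 8G  -n var  vg_os

mkfs.ext4 /dev/vg_os/root   # and likewise for home / var
```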

When it comes to k8s, you can use the same LVM volume group and hand k8s a block device inside your VM (just like the underlying hypervisor does for your VM), or you can use LVM to make a new volume, format it ext4 or xfs, mount it, and point k8s to run pods from that mount point.
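
A sketch of that second approach, with the LV name, mount path and sizes being assumptions on my part:

```bash
# Carve a new LV out of the existing VG, format it and mount it
lvcreate -L 100G -n k8s vg_os
mkfs.xfs /dev/vg_os/k8s
mkdir -p /var/lib/k8s-storage
mount /dev/vg_os/k8s /var/lib/k8s-storage
echo '/dev/vg_os/k8s /var/lib/k8s-storage xfs defaults 0 0' >> /etc/fstab

# A local PersistentVolume (or a local-path provisioner) can then point at
# /var/lib/k8s-storage so pods get their volumes from the LVM-backed disk.
```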

I think asking Brave AI might’ve been easier, idk.

Yes! I’m able to understand KVM now, and that it’s a separate thing from LVM.

As for the VPS that we bought: it offers Block Storage. Do you mean that LVM may help to enhance the Local Disk? E.g. a 40GB SSD with 500GB Block Storage makes 540GB of Local Disk?

Any VPS offers block storage as that’s the only way your VM can see a virtual disk that it can format and partition and what-not. The backend storage in KVM could be a local LVM pool, or it could be a LUN to a SAN (or LVM on top of the LUN). You don’t need to think too much about this, that’s the provider’s problem (although if you’re looking for performance, it’s important how they define “local” storage).

As far as you / your VM is concerned, you can use LVM to enhance your own experience. Say your VM comes with a default 40GB SSD for the OS and 500GB of additional storage (call it a 2nd drive), the 2nd drive hits over 80% utilization after a year, and you request more storage. Now the 2nd drive is 1TB, but the partitions inside the VM still only see 500GB. You can use LVM to expand the physical volume (PV) to the full size of the disk (1TB); the volume group (VG) will see that change, and you’ll have 500GB of free (unallocated) space in the volume group on top of the previous 500GB already allocated to your logical volumes (LVs).
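
The grow step boils down to something like this, assuming the 2nd disk is /dev/vdb and the PV sits directly on the disk (if the PV lives on a partition, you’d grow the partition first, e.g. with growpart):

```bash
# After the provider grows the disk, check that the guest sees the new size
lsblk /dev/vdb

# Grow the physical volume to fill the disk
pvresize /dev/vdb

# The volume group now shows the extra ~500G as free (unallocated) space
vgs
pvs
```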

You can then allocate more space from the VG to any of the LVs on the fly, live, without interrupting your services. Without LVM, you would typically have to stop the programs running from the 2nd disk partition’s mount path, unmount it, expand the partition (with fdisk / parted / gparted), grow the filesystem (resize2fs for ext4, or xfs_growfs for XFS, which runs against a mounted filesystem), remount the now larger volume, then start your programs again.
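
With LVM it’s one command per volume, done live (the -r flag calls the right filesystem resize tool for you; LV name as in the earlier sketch):

```bash
# Give the data LV an extra 300G and grow the filesystem on top, online
lvextend -r -L +300G /dev/vg_data/lv_data

# ...or just hand it everything that's free in the VG
lvextend -r -l +100%FREE /dev/vg_data/lv_data
```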

If your VPS provider uses old technology, they might not even be able to expand the 2nd disk to 1TB, and can only give you a 3rd disk of 500GB. If you didn’t use LVM, you’d have no way to grow your existing mount point. With LVM, you can partition the 3rd disk, create a new PV on its first partition, add that PV to your existing VG, and you get the same ability as above to grow your old mount point. That wouldn’t be possible if you’d formatted your vdisks directly.
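
That "bolt a 3rd disk onto the existing VG" move looks roughly like this, assuming the new disk shows up as /dev/vdc:

```bash
# Make the new disk (or a partition on it) an LVM physical volume
pvcreate /dev/vdc

# Add it to the existing volume group; the VG just grew by 500G
vgextend vg_data /dev/vdc

# Now the old LV / mount point can be grown live, same as before
lvextend -r -l +100%FREE /dev/vg_data/lv_data
```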

As I mentioned before, LVM also gives you snapshot capabilities (but you need free, unallocated space left in the volume group). So you can realistically recover your own VM to a previous snapshot if, say, an update borks the OS.
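
A snapshot-before-update sketch (sizes and names are placeholders; the snapshot only needs enough free VG space to hold the blocks that change while it exists):

```bash
# Take a snapshot of the root LV before a risky update
lvcreate -s -L 10G -n root_pre_update /dev/vg_os/root

# Update went fine? Drop the snapshot.
lvremove /dev/vg_os/root_pre_update

# Update borked the OS? Merge the snapshot back
# (for an in-use root LV the merge completes on the next activation/reboot).
lvconvert --merge /dev/vg_os/root_pre_update
```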

While you can combine the 40GB with the 500GB using LVM, I wouldn’t do that unless you know exactly what the storage tier is for each (e.g. if the 500GB is just HDD, I’d keep it separate from the 40GB: use the 40GB only for the OS and the 500GB for bulk storage, say a MySQL database data folder like /var/lib/mysql).