I have a laptop with 2 drives. One is a 240GB m.2 and the other is a 512GB 2.5” SSD which I want to use as additional storage. Is it worth setting the 2 drives up using LVM or am I better off mounting the drive normally?
While I’ve seen Jay’s video on LVM and agree it can do some fancy stuff, I think it’s mostly suited for servers and systems that need to run continuously, where you need to manage storage devices at a fine-grained level.
For desktop installations, I think it’s not really necessary, and I would just go with 1 partition per disk. Of course, you may see some benefits that I don’t, so decide for yourself.
In the end, I actually decided to just mount the drive to a folder in my home directory.
LVM is one of those things that’s useful depending on your use case and, in some ways, your creativity. On Arch machines, for example, I use it to take snapshots of the root filesystem before running updates. After a week, if the updates are fine, I finalize the snapshot. If something was broken by an update, I roll the snapshot back and try the update again later. It can be useful on laptops as well as servers, depending on what you want to accomplish.
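As a sketch, that snapshot-before-update workflow looks roughly like this (the volume group name vg0, the snapshot name and the 5G copy-on-write size are assumptions; lv_root is the name used later in this thread):

```shell
# Before updating: snapshot the root LV. The snapshot only needs
# space for blocks that change after it is taken (copy-on-write).
lvcreate --snapshot --size 5G --name root_presnap vg0/lv_root

# ...run the update, use the system for a week...

# Updates fine? "Finalize" by discarding the snapshot:
lvremove vg0/root_presnap

# Update broke something? Merge the snapshot back instead.
# This rolls lv_root back to its pre-update state; if the LV is
# in use, the merge completes on the next activation (reboot).
lvconvert --merge vg0/root_presnap
```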
I really like the idea of using an LVM snapshot of the root file system to guard against breaking updates. I want the ability to simply “rollback” after a full system upgrade.
So after learning about LVM in Jay’s video, I did the following:
- booted the Arch installation media
- decrypted my partition (the one with the volume group on it)
- created a snapshot of lv_root
- restarted my system and did a full system upgrade
- rebooted into the installation media
- merged the snapshot with lv_root (to simulate a rollback)
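For reference, the live-media part of those steps would look roughly like this (a sketch; the partition path /dev/nvme0n1p2, the mapper name cryptroot, the VG name vg0 and the snapshot name root_snap are assumptions, yours will differ):

```shell
# From the Arch installation media: unlock the encrypted partition
# that holds the volume group, then activate its logical volumes.
cryptsetup open /dev/nvme0n1p2 cryptroot
vgchange -ay vg0

# Merge the snapshot back into lv_root (the simulated rollback).
# lv_root must not be mounted for the merge to start immediately.
lvconvert --merge vg0/root_snap
```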
That’s when I got stuck. After I unlocked my drive, I could see only two error messages:
- Failed to start Load Kernel Modules.
- Failed to start Docker Application Container Engine.
My research so far suggests the issue is caused by the new kernel version that was part of my system upgrade. The new kernel is located on /boot (its own partition), but the kernel modules are in /lib (on my root partition). During the rollback I restored the kernel modules of the old kernel, while the kernel itself was untouched.
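The mismatch is easy to demonstrate: the running kernel must find a module tree matching its own version string under /lib/modules on the (rolled-back) root filesystem. A minimal check:

```shell
# Show why boot fails after a root-only rollback: the running kernel
# (loaded from /boot, untouched by the rollback) looks for its module
# tree under /lib/modules on the restored root filesystem.
running=$(uname -r)
echo "running kernel: $running"
if [ ! -d "/lib/modules/$running" ]; then
    echo "no /lib/modules/$running -> 'Failed to start Load Kernel Modules'"
fi
```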
Does anybody know of a practical way to solve this?
Welcome to the forum!
If you are using GRUB, the easiest way is to just select an older version of the kernel from the boot options and you should be fine. It’s going to be a bit of trial and error if you don’t remember which kernel goes with which rollback.
Hey @Biky, thanks a lot for the reply.
So for this to work I would need to manually install specific kernel versions first and create GRUB entries for them? Because currently I only have entries for Linux (latest) and Linux (LTS).
No, if you have GRUB, everything should show up automatically. When you boot, just go to the Advanced options entry in the menu (usually the 2nd entry) and select an older version of the kernel. If you know which one you restored to, select that one.
And you didn’t upgrade from LTS to latest?
Hmm. I don’t have this option. I use Arch and whenever I upgrade my kernel, pacman will overwrite the old kernel. So I don’t think it’s still installed.
Oooh, ok, now that makes sense. Yeah, Arch just removes the old kernel entries IIRC. Hmm… I don’t see an easy way to fix that other than running grub-mkconfig manually. I never used the LVM snapshot stuff.

When you roll back to the last (or whatever version) snapshot of the root partition, do you have to reboot? In your previous message, I see you rebooted into the installation media. If so, that’s an easy fix: after you roll back your lv_root, you mount the root and boot partitions, arch-chroot into your installation and run grub-mkconfig. That should work, I think. That is, if you are using GRUB and not gummiboot (a.k.a. systemd-boot).
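Concretely, that fix from the live medium would look something like this (a sketch; the LV path /dev/vg0/lv_root and the boot partition /dev/nvme0n1p1 are assumptions):

```shell
# Mount the rolled-back root and the separate boot partition,
# then chroot in and regenerate the GRUB menu entries.
mount /dev/vg0/lv_root /mnt
mount /dev/nvme0n1p1 /mnt/boot
arch-chroot /mnt
grub-mkconfig -o /boot/grub/grub.cfg
```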
Thank you very much for your input. Much appreciated.
I am not sure the GRUB entries are the problem. As far as I understand, on Arch you have different kinds of kernels (mainline, LTS, …). I have both installed: mainline (package name: linux) and LTS (package name: linux-lts). However, if I install a newer version of the mainline kernel, it replaces the older one. This means the kernel version I would need (in order to boot after restoring my snapshot) is no longer installed.
I fear the only way to achieve this would be to also back up my boot partition. But that would complicate the entire process quite a bit (since it is a physical partition and not part of my LVM setup).
I think I figured out a solution myself. Let me explain:
My root issue is that if I restore a snapshot after upgrading my kernel (from version x to version y), my root file system holds the kernel modules of version x (because it got restored) while I am still running kernel y, which is stored on /boot and was not affected by the restore.

All I need to do is install the kernel of version x again (while /boot is mounted) to fix the mismatch. Pacman of course still has the binaries stored in /var/cache/pacman/pkg/.

So after I restored my snapshot, I booted into an installation medium, mounted my root filesystem at /mnt and my boot partition at /mnt/boot, chrooted into /mnt and executed:

pacman -U /var/cache/pacman/pkg/linux-x-x86_64.pkg.tar.zst

This installed the kernel of version x on /mnt/boot and everything booted fine.
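Put together, the whole repair sequence from the live medium is short (a sketch; the device paths are assumptions, and the exact package file name in the cache will differ, so tab completion under /var/cache/pacman/pkg/ helps):

```shell
# Unlock the LUKS partition holding the VG and activate the LVs.
cryptsetup open /dev/nvme0n1p2 cryptroot
vgchange -ay vg0

# Mount the rolled-back root and the untouched /boot partition.
mount /dev/vg0/lv_root /mnt
mount /dev/nvme0n1p1 /mnt/boot

# Reinstall the kernel that matches the restored modules, straight
# from pacman's package cache on the rolled-back root.
arch-chroot /mnt
pacman -U /var/cache/pacman/pkg/linux-x-x86_64.pkg.tar.zst
```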
If this is an ok way to solve this, I would of course be interested in automating it somehow (maybe via a systemd.target) so I don’t have to manually boot into a live system each time.
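One half of this can be automated today: a pacman hook can snapshot the root LV before any kernel upgrade, so the manual snapshot step disappears (the rollback itself still needs a live system). A sketch, not tested; the hook file name, snapshot name/size and the VG name vg0 are assumptions:

```ini
# /etc/pacman.d/hooks/50-lvm-snapshot.hook
[Trigger]
Operation = Upgrade
Type = Package
Target = linux
Target = linux-lts

[Action]
Description = Snapshotting vg0/lv_root before kernel upgrade
When = PreTransaction
Depends = lvm2
AbortOnFail
Exec = /usr/bin/lvcreate --snapshot --size 5G --name pre_upgrade_snap vg0/lv_root
```

Note that lvcreate will fail (and, with AbortOnFail, cancel the upgrade) if a snapshot with that name already exists, so the old snapshot has to be finalized or merged first.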
Feedback, criticism or help much appreciated.