Hi,
I recently got a new hard drive and I was planning on using it to set up a simple RAID 1. I don’t have a home lab or anything, so I wanted to keep things as simple as possible and figured that using LVM for this would be a good idea. Setting it up was pretty straightforward, and it worked fine. But before committing to this long term, I wanted to test how I would recover my files should something happen to one of the drives.
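For context, the setup was roughly this (device paths, sizes and the filesystem are from memory, so treat them as approximate rather than the exact commands):

sudo pvcreate /dev/sda1 /dev/sdb1                            # both drives as physical volumes
sudo vgcreate vg_backup /dev/sda1 /dev/sdb1                  # one volume group across them
sudo lvcreate --type raid1 -m1 -L 1G -n lv_backup vg_backup  # mirrored logical volume
sudo mkfs.ext4 /dev/vg_backup/lv_backup                      # filesystem type is just an example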
When I unplug one of the drives, things continue to work normally and I can still access the files in the volume. But when I unplug both drives and then plug only one of them back in, things get messy very quickly. Only once did I manage to regain access to my files, by running pvscan and then extending the volume group onto the new drive with vgextend. The rest of my attempts went poorly as I tried things like lvconvert --repair and similar.
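For reference, the commands involved were roughly these (reconstructed from memory; /dev/sdb1 is just a placeholder for the re-added drive’s partition):

# the one attempt that worked:
sudo pvscan                                  # rescan for physical volumes
sudo vgextend vg_backup /dev/sdb1            # bring the volume group back up to size

# the kind of thing I tried in the attempts that failed:
sudo lvconvert --repair vg_backup/lv_backup  # replace the failed mirror leg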
Running lsblk shows that something is not right. Here, only one of the drives I used to set up the RAID 1 is plugged in, /dev/sda, but it shows up as a regular drive. At the same time, the mirror’s logical volumes are still listed, yet I can’t mount the volume.
NAME                              MAJ:MIN RM   SIZE RO TYPE MOUNTPOINTS
sda                                 8:0    1  57.6G  0 disk
└─sda1                              8:1    1  57.6G  0 part
  ├─vg_backup-lv_backup_rmeta_1   254:2    0     4M  0 lvm
  │ └─vg_backup-lv_backup         254:4    0     1G  0 lvm
  └─vg_backup-lv_backup_rimage_1  254:3    0     1G  0 lvm
    └─vg_backup-lv_backup         254:4    0     1G  0 lvm
nvme0n1                           259:0    0 232.9G  0 disk
├─nvme0n1p1                       259:1    0   512M  0 part /boot/efi
├─nvme0n1p2                       259:2    0 231.4G  0 part /
└─nvme0n1p3                       259:3    0   977M  0 part [SWAP]
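Is something along these lines the right direction for getting read access back from the single surviving drive until I can add a replacement? I’m guessing here, so the flags may well be wrong:

sudo vgchange -ay --activationmode degraded vg_backup       # activate despite the missing mirror leg
sudo lvs -a -o name,segtype,copy_percent,devices vg_backup  # check the state of the RAID LV
sudo mount -o ro /dev/vg_backup/lv_backup /mnt              # mount read-only to copy files off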
From what I’ve found, the expected course of action seems to be to replace the failed drive as soon as possible. That makes perfect sense, but of course I’d also like to be able to recover my files quickly in the meantime. Does anyone have recommendations or resources on how to do this? I know that things like Btrfs and ZFS exist, but I’m a slow learner and I really want to keep things simple. Eventually, I might want to try a bigger setup where they make more sense.
Thanks!