ZFS problem when attaching extra drive

Hello, I have been following along with the excellent "Ubuntu Server 18.04 Essentials" series and learning the terminal commands (although I am running this on the desktop version of Ubuntu 21.10).

I have successfully created a ZFS pool (a 4-drive raidz: sdb, sdc, sdd, sde), sorted out the permissions and shares, and then copied data to the new pool across the LAN.
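For reference, the pool was created with something along these lines (going from memory, so treat it as a rough sketch rather than the exact command; the drive letters are just what the drives happened to be called at the time):

sudo zpool create storage1 raidz /dev/sdb /dev/sdc /dev/sdd /dev/sde
## storage1 is the pool name; raidz groups the four drives into one redundant vdev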

Next, I turned the machine off and attached another drive (SATA) so I could copy data to the new pool, and had a nasty surprise when I came back: the pool had vanished!

I identified the problem: the new drive had been assigned (sdc). I decided that the best thing to do was to turn the machine off, check that the new drive was in SATA port 6 and try again, but the same thing happened.

Again I turned it off, unplugged the new drive, rebooted and imported my pool now that my 4 drives were back to (sdb, sdc, sdd, sde), and it reappeared and worked again. I repeated the whole process, just in case this was a one-off weird thing, and the same thing happened: the new drive appeared as (sdc) and seemingly broke my raidz pool! I have of course turned the machine off, disconnected the new drive and imported my pool again.

How do I resolve this, both for this drive and for the future SATA drives I will be attaching?

Kind Regards :slight_smile:

Welcome to the forum!

You have to use UUIDs (Universally Unique Identifiers) or WWNs (World Wide Names) instead of the device names the kernel assigns at boot (sdX, vdX, mmcblkXnY, nvmeXnY), because those names can change depending on detection order. You can find the UUID of the HDD by running

ls -l /dev/disk/by-uuid/

You can find the WWN by running:

ls -l /dev/disk/by-id/

WWN entries always start with wwn-.
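To give an idea of what to look for, the by-id listing looks roughly like this (the model, serial and wwn values below are made up; yours will differ), with each entry being a symlink back to the current sdX name:

lrwxrwxrwx 1 root root 9 Jan  1 12:00 ata-WDC_WD40EFRX-68N32N0_WD-XXXXXXXX -> ../../sdb
lrwxrwxrwx 1 root root 9 Jan  1 12:00 wwn-0x50014ee2aaaaaaaa -> ../../sdb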

I don’t know how you can change it in ZFS after the fact though; I usually use the WWNs from the start, when I create the pool in the first place.

As a side note, you can find other disk identifiers under /dev/disk/by-*, like label, partlabel and path. They are all useful in certain situations; it’s good to know about them.

Try this at your own risk; you might lose data if you don’t have backups. You can take one disk at a time out of the pool and re-add it using the wwn. Make sure you use the wwn of the whole disk, not of a partition created on it.

Actually, I found that you can simply export the pool (take it offline) and import it back using the by-id names. My comment above is a bit of a mess, so ignore the disk-by-disk suggestion. Still, assume you will lose data anyway; that’s always a good approach.

zpool status
zpool export tank
## note: your pool will be offline, so no reads or writes can be made to it
zpool import -d /dev/disk/by-id tank
zpool status

Replace tank with your pool name. This should work without losing data. Give it a reboot and another zpool status and check whether the pool members now show up as wwn-* names instead of sdX device names.
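If everything went well, the config section of zpool status should look something like this (illustrative only; your pool name and wwn values will be different):

  pool: tank
 state: ONLINE
config:
        NAME                        STATE     READ WRITE CKSUM
        tank                        ONLINE       0     0     0
          raidz1-0                  ONLINE       0     0     0
            wwn-0x50014ee2aaaaaaaa  ONLINE       0     0     0
            wwn-0x50014ee2bbbbbbbb  ONLINE       0     0     0
            wwn-0x50014ee2cccccccc  ONLINE       0     0     0
            wwn-0x50014ee2dddddddd  ONLINE       0     0     0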

Thank you :smiley::+1:

I will move the data out of the pool onto another drive, nuke the ZFS pool and create it again fresh.

I assume that by doing it this way it won’t then matter which port I have the drive plugged into! If this is the case, that’s great news :smiley:

I will get onto that tomorrow, thanks.

Before you nuke it, try to export and import it back again, as described in my previous comment. But, yeah, do a backup, just in case.

Thank you, will try that now :slight_smile:

EDIT: I haven’t had a chance to do that yet; I re-imported the pool and then ran into other problems… Either way, I have made a copy of the data in the pool, so when I am free tomorrow, I will try exporting the pool and then importing it using the WWN codes (which I have identified and copied in preparation) and will let you know how that works out.

Whether it works or not, I will use the WWN method to create a new pool and then tackle the other issues as I come across them, but from an academic standpoint it would be good for everyone to know how exporting the pool and re-importing it using the WWN IDs works out.

Many thanks again :smiley:


I have tried numerous permutations in case I am misunderstanding your commands and/or the syntax (both likely), but I keep getting the same response, “too many arguments”, so here I am again…

Here is what I tried to begin with; I have saved the command that I finally used, but won’t post it right now so as not to muddy the water.

sudo zpool import -d wwn-0x50026b7235056bc0 wwn-0x50026b7235056bc0 wwn-0x50026b7235056bc0 wwn-0x50026b7235056bc0 /storage1

Thanks.

PS: I won’t nuke the pool until we have an idea of whether this actually works or not, for future reference. You are helping me, and I would like my problem to help others too. I have other things to be getting on with anyway.

The commands given above are meant to be copied and pasted word for word, apart from tank (replace that with the name of your pool) and obviously the comment. So all you need to do is
zpool export yourpoolname
zpool import -d /dev/disk/by-id yourpoolname

And you should be good to go.

Thank you so much :smiley:

I was quite surprised that the commands were so simple, but they worked perfectly first time. I have since connected the same drive that was previously breaking the ZFS array as a USB drive and all was well, I rebooted as a test and all was still well, and the drive is now connected via SATA; a couple of reboots later, everything is still fine. :+1:

I find it quite odd that the ID method is not the default way suggested for setting up ZFS arrays; this :point_up_2: was the first time I saw it suggested, out of the several examples and video tutorials on setting up a ZFS array that I have gone through.

When I have sorted out my messy data, I will wipe a previous Windows RAID array and set up a second ZFS array using the ID method, as it seems to be a far superior way of doing things.
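If I have understood the by-id approach correctly, the new array should just be a case of pointing zpool create at the by-id paths instead of the sdX names, something like this (the pool name and wwn values below are placeholders, not my actual drives):

sudo zpool create storage2 raidz \
  /dev/disk/by-id/wwn-0xaaaaaaaaaaaaaaaa \
  /dev/disk/by-id/wwn-0xbbbbbbbbbbbbbbbb \
  /dev/disk/by-id/wwn-0xcccccccccccccccc \
  /dev/disk/by-id/wwn-0xdddddddddddddddd
## the pool then records the stable by-id names, so the SATA port / detection order no longer matters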

Again, thank you for your help, and apologies for not replying sooner with my results.


Don’t worry about it.

Well, not everyone is aware of IDs. Some Linux / Unix folks are aware of UUIDs and use them (like in fstab), but examples are usually given with the disk names because they are easier to explain. Still, yeah, in production (or any sane environment really), using IDs is way better and should be presented more often, so people learn about it.
