Hi,
I currently have a Proxmox VE instance with two containers running. Since I recently got a better computer for Proxmox, I have created a new Proxmox VE install on that newer computer and was trying to create a cluster so I can move the existing containers to the newer computer. However, I get the following error:
‘this host already contains virtual guests’ when I use the ‘join cluster’ functionality.
What I did:
- created a cluster on the new computer (since I want that computer to stay up after the migration and want to keep using its IP address). I did this using the create cluster function in the cluster section of the Datacenter (on the newer computer)
- Copied the information using the copy information button
- clicked on the ‘join cluster’ button in the cluster section of the old computer’s Datacenter
- provided the password for the newer computer
- received the error above.
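For reference, the GUI steps above correspond roughly to these CLI commands (the cluster name and IP address are placeholders for my setup):

```shell
# On the new computer: create the cluster (cluster name is a placeholder)
pvecm create homelab

# On the old computer: join it, pointing at the new computer's IP (placeholder)
pvecm add 192.168.1.50
```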
I could really use some help, as I am trying to move over to the newer computer; the older computer uses a lot more power and I want to shut it down once the migration is done.
Welcome to the forum!
Stay away from clusters. That’s how I lost the ability to start VMs on my own home cluster (after 2 hosts died in an extended power outage that my UPS couldn’t keep up with).
Proxmox clustering is nice for a larger number of hosts (like 5+, I think up to 25 or something?), but when it comes to <=3 hosts, it’s messy. And because you only want one host to stay up, definitely stay away from clusters.
I’d say format the new host again, get rid of all the clustering parts, then move the VMs manually to the new host. You can do so by powering off the VMs and, depending on the backend storage, either taking a snapshot and doing a zfs send of the zvols Proxmox created for the VMs, or copying the qcow2 files over to the new host. If you’re using LVM, you can convert the volumes to qcow2 on either local or NFS storage, then transfer them over.
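As a rough sketch of those three routes (dataset names, VMIDs, paths and the host name below are placeholders, adjust them for your storage layout):

```shell
# ZFS-backed disk: snapshot the zvol, then send it to the new host
zfs snapshot rpool/data/vm-101-disk-0@migrate
zfs send rpool/data/vm-101-disk-0@migrate | ssh root@new-host zfs recv rpool/data/vm-101-disk-0

# File-based (qcow2) disk: just copy the file over
scp /var/lib/vz/images/101/vm-101-disk-0.qcow2 root@new-host:/var/lib/vz/images/101/

# LVM-backed disk: convert the logical volume to qcow2 first, then copy that
qemu-img convert -O qcow2 /dev/pve/vm-101-disk-0 /tmp/vm-101-disk-0.qcow2
```

Power the VM off before any of this, or the copy will be inconsistent.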
Finally, to get the VMs onto the other host, there’s a folder under /etc/pve or something containing the VM configs, named like 101.conf, 102.conf and so on, somewhere in a qemu folder. You can create the directories and copy these files onto the new host. Then, if the VM disks you copied have the same names and locations on the destination, you should be able to just launch the VMs. Otherwise, you might need to adjust the VM configuration in the GUI to point the VM at the proper vdisks.
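To be concrete (the node names and VMID here are examples), the config files live under /etc/pve, and copying one looks like:

```shell
# VM configs:        /etc/pve/nodes/<nodename>/qemu-server/101.conf
# Container configs: /etc/pve/lxc/101.conf
# Copy a VM config into the new host's own node directory (names are examples):
scp /etc/pve/nodes/oldnode/qemu-server/101.conf \
    root@new-host:/etc/pve/nodes/newnode/qemu-server/101.conf
```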
While clustering might not be for everyone, I would not say that you should stay away from clustering altogether. Keep in mind that clustering requires a minimum of 3 nodes to function properly, as you need to maintain quorum. The third node can be a QDevice or a low-powered node there solely for keeping quorum.
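For what it’s worth, setting up a QDevice is roughly the following (the external device’s IP is a placeholder):

```shell
# On the external device (e.g. a Raspberry Pi): install the quorum daemon
apt install corosync-qnetd

# On a cluster node: install the client and register the QDevice
apt install corosync-qdevice
pvecm qdevice setup 192.168.1.60
```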
With that said, it sounds like you created a cluster on your new node and tried to join the old node, with VMs/LXCs already on it, to that cluster. This produces an error: you can create a cluster on a node with VMs/LXCs present, but you cannot join an existing cluster with them present.
What you can do is remove the new node from the cluster and go back to standalone mode, then create the cluster on the old machine instead and join the new machine to that cluster.
See the following for more help on getting the new machine back to standalone: Separate a Node Without Reinstalling
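If I remember the linked procedure right, it boils down to roughly this, run on the node you want to separate (only do this on a node with nothing you care about on it):

```shell
# Stop the cluster services
systemctl stop pve-cluster corosync

# Start the cluster filesystem in local mode so /etc/pve is writable
pmxcfs -l

# Remove the corosync configuration
rm /etc/pve/corosync.conf
rm -rf /etc/corosync/*

# Stop the local-mode filesystem and restart the normal service
killall pmxcfs
systemctl start pve-cluster
```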
I personally tried that when my cluster had 2 failed nodes. It didn’t work; the node still refused to get out of the cluster. It could be that I did something wrong, I can’t rule that out, but given how much I tried to resurrect it as a standalone node, I don’t think it was me.
I managed to find a workaround: just set the expected quorum to 1 and start my VMs that way. I backed up everything, and now the original node is basically a carcass. I run a container or two on it from time to time, but the build is dead (and I can’t access it to reinstall fresh, it’s way too far away).
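In case anyone else gets stuck the same way, the workaround was just:

```shell
# Tell the cluster to expect only 1 vote, so this lone node becomes quorate
pvecm expected 1
```

Note this is a runtime setting; as far as I know it does not survive a reboot, so it has to be rerun each time the node comes up.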