Convert Pi Ubuntu image to a VM

I have a Raspberry Pi running Ubuntu Server 20.04.

I’d like to create a virtual machine in Proxmox that uses the exact image from the Pi’s microSD card that has said server OS installed on it.

Is this possible? Basically, I’d like to remove the SD card from the Pi, copy it to a file, then import that file into Proxmox and run it as a VM. Is there a procedure to do this?

thanks

I’m not exactly sure how to do that in Proxmox; you’d need to emulate the ARM CPU architecture. For the copy operation itself, you can initially live-boot any distro in a VM and dd the Pi image file onto the VM’s disk, as opposed to a physical SD card. But from there, I’m not sure. All I know is that you need QEMU to emulate ARM instructions.
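To illustrate the dd part, here’s a minimal sketch (the image filename is a placeholder, and the target disk inside the VM may show up as /dev/vda or /dev/sda depending on the virtual disk controller; you’d copy the image into the live environment first, e.g. over scp):

lsblk                                                   # identify the VM's virtual disk
sudo dd if=pi-backup.img of=/dev/vda bs=4M status=progress conv=fsync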

But do note that a Pi image won’t have drivers to run under a VM. Things like Ethernet, display, etc. will probably not work.

Here comes the questioning part: why do you want to run a Pi image on Proxmox? Is there something special about that particular image? Do you want to retire or repurpose the Pi for something else and virtualize what’s inside that SD card? I’m assuming you have a goal in mind; maybe you can share your intent and we may find better options than just trying to emulate ARM.

Also worth mentioning: the Pi bootloader is completely different from what you’d find on BIOS or UEFI systems, which is why you can’t just boot a generic Linux distro; you have to create certain files for the SoC on the Pi (or any phone, basically) to boot into the OS.

I would suggest that whatever you were running on the Pi, you just reinstall on a clean x86 Ubuntu Server VM and transfer your configs, data, and other files to the VM. Some things may need modification, but it shouldn’t be too hard. If you want to run a specific program compiled for ARM, though, like a certain Docker image, that’s going to be a bit more complicated.


I installed Nextcloud on the Pi using Jay’s Learn Linux TV YouTube video and would rather not start from scratch and rebuild the entire thing on a Proxmox server VM. It was a bit of a process, and I have a lot of things configured on the server since then that I don’t want to have to go through and redo either.


I’ll see about putting together a migration guide; it will probably involve 5 to 20 commands at most, if my experience managing similar types of services is anything to go by. Likely just doing a DB dump and copying some configuration files.

Now I’ll have to watch Jay’s Nextcloud video to see what steps he went through. That’s the nice thing about having documented procedures, though I wish we had a transcript of the video, or simply the commands run in a text file, like how Wendell puts wiki articles on the L1T forums for his instructional videos.

Thanks for the help! Here’s the video I watched:

https://www.learnlinux.tv/nextcloud-complete-setup-guide/

Oh, the older guide. Ok, I’ll try giving it a go.

I’ve already started setting up Nextcloud in order to document the backup and restore process, but I had some instability with my system. I added an Ubuntu Focal LXD container, and when I was about to follow the install steps, my PC started misbehaving (I’m using an RPi 4 as my main PC, don’t judge).

I am guessing it was the lack of RAM; I’ll come back after I limit my resource consumption a bit and restrict the container to 2 GB of RAM, hopefully tomorrow. To be honest, I don’t think it should be too difficult.
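(For reference, capping an LXD container’s memory is a one-liner; the container name is a placeholder:)

lxc config set <container> limits.memory 2GB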

I don’t know if Nextcloud has a built-in export feature like GitLab has (I haven’t actually looked into that), but if it doesn’t, it still shouldn’t be more difficult than: copying over your TLS certs and nginx/apache config, installing Nextcloud and MariaDB, doing a mysqldump on the Pi, rsyncing the Nextcloud files and the DB dump to your VM, then creating a new DB on the VM and importing the dump. After that, all that would be left is to start the services and be ready to go.

Thanks for going through this. I’m reasonably new to Linux and databases, but not a complete noob, so any help like this is much appreciated.


I didn’t go through the full process of setting this up; in particular, I did not create a Let’s Encrypt account, as I don’t have a domain to use, so I used a local domain (which doesn’t work with Let’s Encrypt). But I believe I got most of what is necessary to start the backup and restore process. Should be pretty easy… hopefully.

One thing to note: do not blindly copy-paste these commands; replace anything in <> with your own values.

First, here are the original setup steps, just for reference:

sudo apt update
sudo apt dist-upgrade
sudo apt install wget mariadb-server php php-apcu php-bcmath php-cli php-common php-curl php-gd php-gmp php-imagick php-intl php-mbstring php-mysql php-zip php-xml unzip python3-certbot-apache
wget https://download.nextcloud.com/server/releases/nextcloud-23.0.3.zip
sudo mysql_secure_installation
# press Enter (no current root password)
# y (set root password)
# type and confirm the new password
# y (remove anonymous users)
# y (disallow root login remotely)
# y (remove test database)
# y (reload privilege tables)
sudo mariadb
CREATE DATABASE nextcloud;
GRANT ALL PRIVILEGES ON nextcloud.* TO 'nextcloud'@'localhost' IDENTIFIED BY 'OURpassword';
FLUSH PRIVILEGES;
sudo phpenmod bcmath gmp imagick intl
unzip nextcloud*.zip
mv nextcloud ubu-nc.local
sudo chown -R www-data:www-data ubu-nc.local
sudo mv ubu-nc.local /var/www/
sudo a2dissite 000-default.conf
sudo systemctl reload apache2
sudo nvim /etc/apache2/sites-available/001-ubu-nc.local.conf
# (vhost contents not shown here; at minimum it needs ServerName ubu-nc.local
#  and DocumentRoot /var/www/ubu-nc.local)
sudo a2ensite 001-ubu-nc.local.conf
sudo nvim /etc/php/7.4/apache2/php.ini
# memory_limit = 512M
# upload_max_filesize = 200M
# max_execution_time = 360
# post_max_size = 200M
# date.timezone = America/New_York
# opcache.enable=1
# opcache.interned_strings_buffer=8
# opcache.max_accelerated_files=10000
# opcache.memory_consumption=128
# opcache.save_comments=1
# opcache.revalidate_freq=1

sudo a2enmod dir env headers mime rewrite ssl
sudo systemctl restart apache2

# access nextcloud
# set a user and pass
# insert nextcloud mariadb user and pass / db name nextcloud / install recommended apps

sudo nvim /var/www/ubu-nc.local/config/config.php

# add:
# 'memcache.local' => '\OC\Memcache\APCu',

sudo chmod 660 /var/www/ubu-nc.local/config/config.php
sudo chown root:www-data /var/www/ubu-nc.local/config/config.php
sudo -u www-data php /var/www/ubu-nc.local/occ db:add-missing-indices
sudo certbot --apache -d ubu-nc.local

Ok, all good thus far. Let’s create a new VM now. Ubuntu 20.04 comes with rsync by default, but if which rsync returns nothing, do a sudo apt update && sudo apt install rsync on both the Pi and the VM. After that, make sure you update both the Pi and the VM by running sudo apt update && sudo apt dist-upgrade. I would say keeping them in line is pretty important. Reboot both just to make sure everything is running the latest stuff.
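Condensed, that’s the following, run on both the Pi and the VM:

which rsync || sudo apt install rsync     # install rsync only if it's missing
sudo apt update && sudo apt dist-upgrade
sudo reboot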

On the RPi do the following:

sudo systemctl stop apache2   # stop the web server so the files and DB stay consistent
sudo mysqldump --all-databases > nextcloud.sql
sudo tar -cjvf nextcloud.tar.bz2 /etc/apache2 /etc/ssl /etc/letsencrypt /etc/php /var/www /home/<user>/nextcloud.sql
rsync nextcloud.tar.bz2 <user>@<VM>:/home/<user>/
# note: you may need to do steps 1 and 2 on the VM first to be able to transfer the archive through rsync
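Optionally, verify the archive arrived intact before tearing anything down (illustrative; run the first command on the Pi, the second on the VM, and compare the hashes):

sha256sum nextcloud.tar.bz2                # on the Pi
sha256sum /home/<user>/nextcloud.tar.bz2   # on the VM, hashes should match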

On the new VM, let’s do the following after you have created your user (refer to the original install for the adduser command and usermod and passwd and stuff):

sudo apt update
sudo apt install wget mariadb-server php php-apcu php-bcmath php-cli php-common php-curl php-gd php-gmp php-imagick php-intl php-mbstring php-mysql php-zip php-xml unzip openssh-server python3-certbot-apache
sudo mysql_secure_installation
# press Enter (no current root password)
# y (set root password)
# type and confirm the new password
# y (remove anonymous users)
# y (disallow root login remotely)
# y (remove test database)
# y (reload privilege tables)
sudo systemctl stop apache2
# ubuntu, why you always have to enable and start everything after they 
# get installed, it's ridiculous and arguably a security liability!

tar -xjvf nextcloud.tar.bz2   # extracts into ./etc, ./var and ./home under the current directory
sudo mv /etc/ssl /etc/ssl-old
sudo mv /etc/apache2 /etc/apache2-old
sudo mv /etc/letsencrypt /etc/letsencrypt-old
sudo mv /etc/php /etc/php-old

sudo mv etc/* /etc/
sudo rm -rf /home/<user>/etc

sudo mv var/www/ubu-nc.local /var/www/
sudo rm -rf /home/<user>/var/

sudo mariadb < home/<user>/nextcloud.sql   # note: the dump extracted to a relative path
sudo systemctl start apache2

# optional, once everything works:
sudo rm -rf /etc/*-old
sudo rm -rf /home/<user>/nextcloud* /home/<user>/home

Technically, you should now have Nextcloud back up and running. But you may need to run:

sudo phpenmod bcmath gmp imagick intl
sudo a2enmod dir env headers mime rewrite ssl
sudo -u www-data php /var/www/ubu-nc.local/occ db:add-missing-indices

In theory, those should already be applied, as we copied the settings from /etc, but I’m not sure if there’s anything else lying around in /var/lib/. There’s no harm in running the commands again after the settings have been applied; it will just ensure they are enabled.


Given the nature of this post, I deduced that you have no backups. Here’s how you do Nextcloud backups using the classic sysadmin method (again, I don’t know if Nextcloud has an auto-backup option, but I highly doubt it given that it’s configured as a simple website, and even if it did, I don’t think it would back up the SSL certificates):

sudo mysqldump --all-databases > nextcloud.sql
sudo tar -cjvf nextcloud.tar.bz2 /etc/apache2 /etc/ssl /etc/letsencrypt /etc/php /var/www /home/<user>/nextcloud.sql

Now, you can use the commands above to automate backups, maybe put them in a crontab or use systemd timers. Keep in mind that this just saves files locally, so it’s best to mount an NFS or SSHFS path and back up to that mount point, so that in case of a catastrophic storage failure you still have an actual copy of your files and configs. Use the same steps as above to restore from backup. This should have an RTO of no more than 10 minutes, excluding the rsync/copy of the archive and the extraction time.
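As a sketch, a nightly root crontab entry could look like this (the 03:00 schedule and the /mnt/backup mount point are assumptions, and note that % has to be escaped in crontabs):

# edit with: sudo crontab -e
0 3 * * * mysqldump --all-databases > /root/nextcloud.sql && tar -cjf /mnt/backup/nextcloud-$(date +\%F).tar.bz2 /etc/apache2 /etc/ssl /etc/letsencrypt /etc/php /var/www /root/nextcloud.sql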


Note: in practice, you may be able to just systemctl stop mariadb, rsync /etc/mysql (or /etc/my.cnf, depending on the distro) and /var/lib/mysql, and place them on the target server, but that is the very dirty way of doing backups. All you really need is to run dumps. The best part of running mysqldump or pg_dump or the like is that your service won’t go down while you back up the DB; well, for MySQL at least, if you are using the InnoDB engine (which I believe is the default).
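On that note, if you want the dump itself to be consistent without locking tables, the usual way with InnoDB is mysqldump’s --single-transaction flag, a minimal sketch:

# takes a consistent snapshot of InnoDB tables without blocking writes
sudo mysqldump --single-transaction --all-databases > nextcloud.sql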

If you try to back up a running DB by copying its files, you’re going to have a bad time; databases are sensitive and you can’t just rsync or archive a running DB’s data directory and expect it to restore properly. So it’s best to just stick with the dump approach.

Another way to back up databases is to copy archive logs (basically all the SQL commands run on the server), but that’s a story for another time.

This is awesome, thanks for all the help. I’m going to experiment with it now.
