How to make a complete backup

I have an email server on an Ubuntu 22.04 server, installed using the iRedMail script. It's time to think about backups. How do I best make a total backup of the server, so I can rebuild it if it crashes? The iRedMail script takes care of the DB dumps every day at the same time, so my idea is to make a total backup once the SQL dumps are done. I want to use rsync for this if possible. How do I build a script to do this using cron, and which folders don't have to be backed up?

Just an add-on question: is Timeshift an alternative?

IDK about iRedMail, but if it dumps the DB somewhere, all you need to back up is the mails themselves, maybe the server program (if it’s a tarball install) and the dumps.

Then you install using the same script (if it’s not a tarball), restore the DB and slap the mails back in the same location. Sometimes you might be able to get away with putting everything in the same place and then deleting the corrupted DB and importing the one from backups.

Doing it that way will surely create a lot of extra work. For example, no TLS certificates will come over, so they'd have to be reinstalled. The iRedMail script installs Postfix, Dovecot, Roundcube and many other parts on a fresh server, so I am not sure it will work. It would be better to be able to back up the complete server. I read that Timeshift uses rsync; could it be an alternative? The VPS provider has a snapshot feature that is similar and works flawlessly, so something like that would be nice.

Ok, if you’re using a VPS, it’s going to be difficult. If you can control the OS and can boot a live ISO, it should be doable. Otherwise, you’re kinda SOL with your current VPS. At the very least, you should be able to mount this VM’s disk on another VM, in order to read the rootfs offline (when the OS on it is not running).

If you can do that, then stop your mail server and DB services and run rsync of “/” with the exception of /dev, /proc and /sys, then at the end start the services again. Obviously you won’t be receiving any emails during this time, unless you have 2 mail servers.
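The "rsync / except the pseudo-filesystems" step can be sketched with a toy tree; the temp dirs here stand in for the real rootfs and destination (an assumption so the example is runnable — on the real server the source would be `/`, the target an SSH destination, and you'd add `-AX` for ACLs/xattrs):

```shell
#!/bin/sh
# Toy version of "rsync / except /dev, /proc and /sys": a throwaway
# source tree stands in for the rootfs (assumption for this sketch).
SRC=$(mktemp -d); DST=$(mktemp -d)
mkdir -p "$SRC/etc" "$SRC/dev" "$SRC/proc" "$SRC/sys"
echo "config" > "$SRC/etc/postfix.cf"
echo "junk"   > "$SRC/dev/junk"

# --exclude='/dev/*' keeps the mount-point directory itself
# but skips its contents, so the restored tree can be booted.
rsync -a --delete \
      --exclude='/dev/*' --exclude='/proc/*' --exclude='/sys/*' \
      "$SRC"/ "$DST"/
```

The trailing `/*` in the excludes is what preserves the empty mount-point directories on the destination.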

If you can run iredmail in a LXC / Incus container, you could do something like stop the container, take a BTRFS snapshot, start the container right back up (less downtime than plain rsync) and rsync the contents of the container. That way, if the main OS breaks, you only need to set up LXC / Incus again and copy the contents on the new server. And this makes your setup portable between VPSes (even if you’re unable to boot live ISOs).
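The stop / snapshot / start / rsync cycle above could look roughly like this. Everything here is an assumption for illustration (container name "mail", default Incus storage paths, the backup host): `RUN=echo` prints each command instead of executing it, so clear it on a real Incus-on-btrfs host.

```shell
#!/bin/sh
# Illustrative only: container name, btrfs paths and backup host are
# assumptions. RUN=echo prints the commands rather than running them.
RUN=echo
SNAP="/var/lib/incus/snapshots/mail-$(date +%F)"

$RUN incus stop mail
$RUN btrfs subvolume snapshot -r \
     /var/lib/incus/storage-pools/default/containers/mail "$SNAP"
$RUN incus start mail                      # downtime ends here
$RUN rsync -aAX "$SNAP"/ backup@backup.example.com:/backups/mail/
```

The point of the snapshot is that the container is only down for the seconds the snapshot takes; the slow rsync then reads from the frozen, read-only copy.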

Not as powerful as zfs, but it should get the job done within the restricted confines of VPSes (zfs-send is fantastic). Usually you can only deploy images they (the VPS providers) support and have no access to VM controls other than power off, power on and force reboot.

The problem with backing up the rootfs of a working VM is that, to recover, the OS inside the VM needs to be offline. That’s why a live ISO or another VM with the original VM’s mounted disk is necessary. With LXC, you don’t need that, as you’re working with a virtual OS inside the main OS.

You have to strike a balance between backup simplicity, recovery simplicity, downtime and automation. IMO, the best thing you could do is automate your backup and recovery. See what you actually need and only back those things up, then when you restore, you first delete the same files if they exist on the target, then copy in reverse. Taking notes of every folder that’s needed, like /etc/ssl, /opt/mail-server, /mnt/mail-data, /mnt/db-data, /mnt/db-dumps and so on.
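The "delete the same files on the target, then copy in reverse" restore can be demonstrated on a toy tree (temp dirs stand in for the real rootfs and the backup copy; /etc/ssl is one of the folders named above):

```shell
#!/bin/sh
# Toy run of "delete on the target, then copy back from the backup".
BACKUP=$(mktemp -d); TARGET=$(mktemp -d)
mkdir -p "$BACKUP/etc/ssl" "$TARGET/etc/ssl"
echo cert  > "$BACKUP/etc/ssl/mail.pem"
echo stale > "$TARGET/etc/ssl/old.pem"    # leftover that must not survive

for DIR in /etc/ssl; do
    rm -rf "${TARGET:?}$DIR"                # delete the files if they exist
    rsync -a "$BACKUP$DIR"/ "$TARGET$DIR"/  # then copy in reverse
done
```

The `${TARGET:?}` guard aborts the script if the variable is ever empty, so the `rm -rf` can never land on the real `/etc/ssl` by accident.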

The ideal scenario is that you don’t backup more than you need. The OS is reproducible, you shouldn’t need to have to back it up. If you want to skip automation, backing up the OS data with the mail server and DB stopped is the laziest thing you can do and makes recovery easy, if you can meet the prerequisites, but it’s highly inefficient.

Well, I assumed that the VPS was an LXC container. I know the provider runs Proxmox, but I am not skilled enough to know.
So the best solution would be to get an agreement with the hosting provider to be able to schedule snapshots. I have used them quite often during the 10 years I have been with the provider and they never fail.
Anyway, I need to find a solution to secure the customers' emails.

If your VPS is giving you LXC containers and not a full VM, they’re scamming you (unless they have a lower price for the containers). Proxmox has good backup features, but unless you take the VM snapshot when the DB is shut down, you won’t have a consistent backup (i.e. your db is unlikely to be able to start up, unless you use postgres and have write-ahead logs enabled).

The ideal option is to backup what you need and automate your restores. That way, just like with LXC containers, but better, you’ll be VPS independent.

I made a deep dive into the iRedMail documentation and found out that if I back up the /var folder I get the SQL dumps and the mails. I also have to back up the DKIM folder to keep the same DKIM keys and avoid changing them in the DNS. That seems to be all, so I will try that: first manually, then using a cron script. You can always learn something new.
Thanks for the input

Me again!
I have tested a backup script and it works 90%.
I read through different tutorials, also asked ChatGPT, and after that I came up with this part:

```shell
RSYNC_OPTS="-avz --delete"
```

--delete should remove files from the destination that are not in the source. But the problem is that when everything is synced, all the destination files are deleted.
So what am I doing wrong?

I’d use:

```shell
RSYNC_OPTS="-arolgAX --delete"
```

Keep in mind “--delete” doesn’t do wildcards. What do the source and destination folders look like?

I'll publish the whole script here. SSH is secured with a keypair, so I don't think it's a security problem to publish it.


```shell
# Directories to backup
DIRS=( … )

# Remote server details
REMOTE_USER=…
REMOTE_HOST=…
REMOTE_DIR=…

# Rsync options
RSYNC_OPTS="-avz --delete"

# Loop through each directory and sync it to the remote server
for DIR in "${DIRS[@]}"; do
    …
done

# Print completion message
echo "Backup completed successfully to $REMOTE_HOST:$REMOTE_DIR"
```

I noticed the slashes after the directories, so they are gone now in the script, but it acts the same.

There’s your problem. Don’t loop when you use --delete, unless you have a specific destination for each folder.
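The failure mode is easy to reproduce locally: each pass of the loop tells rsync to make the one destination mirror the current source, so the last directory synced deletes everything the earlier passes copied (temp dirs as stand-ins for the real folders):

```shell
#!/bin/sh
# Reproduce the bug: two sources, one destination, --delete in a loop.
A=$(mktemp -d); B=$(mktemp -d); DEST=$(mktemp -d)
echo mails > "$A/a.txt"
echo dumps > "$B/b.txt"

for DIR in "$A" "$B"; do
    rsync -a --delete "$DIR"/ "$DEST"/
done
# only b.txt is left: the second pass deleted a.txt
```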

```shell
RSYNC_OPTS="-arolgAX --delete"

rsync ${RSYNC_OPTS} \
    --include="/var/" --include="/var/vmail/" \
    --include="/var/vmail/backup/***" --include="/var/vmail/vmail1/***" \
    --include="/var/lib/" --include="/var/lib/dkim/***" \
    --exclude="*" / ${REMOTE_USER}@${REMOTE_HOST}:${REMOTE_DIR}
```

Don’t add the “/” at the end of rsync source folder. You might also want to include some /etc stuff.

Also, rsync will just sync the data, but you’ll not get a guaranteed backup. I believe you mentioned this is a production box. I wouldn’t mess with rsync just like that.

You can use rsync for backups, but you must keep more than 1 version in order to count (what if he remembers he deleted a mail from a week ago, that ended up being important?). Add something to the REMOTE_DIR, like the date.

Something like

```shell
DATE="$(date +%Y-%m-%d)"
```

And use this format to paste code, it’s hard to read otherwise.

The mails are saved per date by the system. Also, the SQL dumps are saved per date. So as long as I don't delete any backups, every day is saved individually.
What backup method would you recommend?

I tested your changes and get this now

```
hostname contains invalid characters
rsync: connection unexpectedly closed (0 bytes received so far) [sender]
rsync error: unexplained error (code 255) at io.c(231) [sender=3.2.7]
```

If the mails are dumped in something like a tar by the system, along the sql dumps, then you should be fine, as long as the dated archives don’t get deleted at the destination (i.e. no --delete flag).

I prefer the “pull” method, it’s more secure by design. You schedule your backup on the backup server. If your mail server is compromised, then that thing has root SSH access to your backup server, to also wipe your backups.

If the backup server can pull the data from the mail server, then if the mail server gets compromised (there’s more chances of vulnerabilities being found in mail servers than plain ssh), it won’t have access to the backups. The backup server getting compromised is less likely, generally speaking (because usually, there’s not a lot of things running on it, i.e. less attack surface).
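A pull setup along those lines could be a single crontab entry on the backup server (hostname, key path and folders are assumptions; note the `\%` — cron treats a bare `%` as a newline):

```shell
# /etc/crontab on the backup server -- illustrative values throughout
30 3 * * * root rsync -aAX -e "ssh -i /root/.ssh/backup_key" root@mail.example.com:/var/vmail/ /backups/mail/$(date +\%F)/
```

On the mail server you'd authorize that key for the backup host only, so the trust relationship points in one direction.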

I have only used zfs-snapshot and zfs-send as a backup method, besides doing a bunch of tar archives and restic. There’s a guide to restic on the IntermitTech YouTube channel (4 parts). You don’t need S3 storage for restic; you can use NFS if it’s local.

Restic is nice, because it has deduplication, but I don’t know if it works with tar archives anyway (at least not compressed tars, that’s for sure, uncompressed ones might be fine).
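For reference, a minimal restic round trip looks roughly like this. The repo path, password file and folders are assumptions, and `RUN=echo` prints the commands instead of running them (clear it on a box that actually has restic):

```shell
#!/bin/sh
# Hypothetical restic workflow; RUN=echo prints instead of executing.
RUN=echo
export RESTIC_REPOSITORY=/mnt/nfs/restic-repo   # NFS works, no S3 needed
export RESTIC_PASSWORD_FILE=/root/.restic-pass

$RUN restic init                                 # once, creates the repo
$RUN restic backup /var/vmail /var/lib/dkim      # deduplicated snapshot
$RUN restic forget --keep-daily 90 --prune       # drop old snapshots
```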

My backup strategy recommendation is to have a folder with the date of the backup for each backup taken and on the backup server have a cron script that deletes the top level folders that are older than N days (like say, 90 days). That way, you’ll keep 90 days of backups on the server (if the storage allows it).
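The cleanup cron job described above can be as small as one find command; here it is exercised against a throwaway tree (BASE and KEEP=90 are the assumed layout):

```shell
#!/bin/sh
# Simulate the backup tree: one old dated folder, one fresh one.
BASE=$(mktemp -d); KEEP=90
mkdir "$BASE/2020-01-01" "$BASE/$(date +%F)"
touch -d '200 days ago' "$BASE/2020-01-01"   # GNU touch; age the old one

# Delete top-level backup folders older than KEEP days.
find "$BASE" -mindepth 1 -maxdepth 1 -type d -mtime +"$KEEP" \
     -exec rm -rf {} +
```

`-mindepth 1 -maxdepth 1` restricts the match to the dated top-level folders, so a fresh file deep inside an old folder can't keep it alive.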

Remember to also test that the backups work, by doing a test restore from time to time.

I made an edit, I missed a “$” before {REMOTE_HOST}.

Ok, thanks for your advice, it's appreciated. It will be a production server; right now it just contains my mail accounts. I will take you up on the pull approach. Thanks again

This may be the worst answer here, but I back up my server with Clonezilla. I have a standard maintenance window.
