I replaced the Unifi router with the new pfSense router, and I was off to a rough start. The labeled LAN and WAN ports were the reverse of the default software configuration. Not necessarily a show stopper, but it certainly slowed things down. When I tried to boot pfSense with a screen and keyboard, the keyboard would not work. It turns out the board powers up the USB port too late for the keyboard to initialize before the OS tries to enumerate USB devices, so it reported an error. Adding an externally powered USB hub between the keyboard and the Protectli solved the problem, and I was able to debug and set up the device.
During this round of refinement, I decided to rebuild each machine from scratch with Ansible. Everything worked well except for the issue of persistent data. Several of the machines have a bit of data I would like to treat as persistent between rebuilds; an example is /var/lib/unifi/backup/autobackup
My initial guess is to use something like rsync to sync those directories to a consistent location on my file server… but I just learned rsync, so I have a tendency to rsync all the things.
There are a number of options, depending on how your file server, storage device, and VMs are hosted. It will also depend on the origin OS (Windows or Linux).
Another consideration is whether there are circumstances where multiple VMs will be accessing the same data at the same time. In other words, do you need to preserve state between writes?
There’s nothing wrong with rsync; it’s very good at what it does, and it cuts down on a lot of bandwidth usage. I use it to sync big CSV files to an NFS share. It works great for that and a good many other tasks.
You can look at a number of options:
NAS server, and add shares to it (NFS, iSCSI, SMB).
AutoFS, NFS, SSH-FS to another server.
SFTP from one box to another (use a simple script to sync it up, even a cron job would suffice).
SCP; it’s manual, but it can also be kicked off with a cron job.
As you mentioned rsync, you can rsync over SSH easily enough, and could push it with cron as well.
Those are just some of the more common methods.
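For the SFTP route, a batch file keeps the session non-interactive so cron can drive it. This is only a sketch: the host, user, and paths are placeholders, and the actual sftp call is left commented out since it needs a reachable server with key-based auth.

```shell
# Generate an SFTP batch file that uploads a local directory's contents.
# Remote host, user, and both paths below are hypothetical placeholders.
BATCH="$(mktemp)"
cat > "$BATCH" <<'EOF'
cd /srv/backups/vm1
lcd /var/lib/unifi/backup/autobackup
put -r .
EOF
# Non-interactive run (requires key-based auth to the real server):
# sftp -b "$BATCH" backupuser@fileserver
cat "$BATCH"
```

With `-b`, sftp aborts on the first failed command, which is what you want from an unattended cron job.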
Probably the easiest (least amount of user interaction / scripting) is a combination of AutoFS and either SSH-FS or NFS. I think you can auto-mount NFS in fstab, so you may not even need AutoFS.
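On the fstab point: an NFS entry can also be made on-demand with systemd's automount option, which behaves a lot like AutoFS — the share mounts on first access and unmounts after an idle period. A hedged example (server name, export, and mountpoint are placeholders):

```
# /etc/fstab — hypothetical NFS entry; mounts on first access, unmounts after 10 min idle
fileserver:/export/data  /mnt/data  nfs  noauto,x-systemd.automount,x-systemd.idle-timeout=600  0  0
```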
Another complete answer! I should have been more specific in my question.
I often use NFS and SMB for storing input and output data on the file server, where the VM is often just a ‘thing’ that does the processing. I need to look into AutoFS. I don’t understand the pros and cons of keeping a remote mount constantly open versus opening and closing it on demand.
In the situation I am currently looking at, I have a few configuration files, ‘progress tracking files’, or local backups that I would like to remain within the VM, while retaining a remote copy for reinstallation when the machine is rebuilt.
In those situations, SFTP, SCP, and rsync seem similar in implementation:
Create a script to copy the file or files from the VM to the storage location.
Create a cron job to trigger the script at some interval.
Copy the file or files from the storage location to the VM at system rebuild.
Manually, this seems like a pain in the butt. With Ansible and some well-chosen variables, it should be pretty straightforward.
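The three steps above can be sketched in shell. Everything here — the host, user, and remote directory names — is a placeholder; the push script is written to a file rather than executed, and the cron and restore commands are shown as comments since they need a real remote.

```shell
# Step 1: a small push script that rsyncs persistent VM data to a remote store.
# "backupuser@fileserver" and the remote path are hypothetical.
cat > /tmp/push-persistent.sh <<'EOF'
#!/bin/sh
# Copy the persistent files from the VM to the storage location.
rsync -az --delete /var/lib/unifi/backup/autobackup/ \
      backupuser@fileserver:/srv/persistent/vm1/autobackup/
EOF
chmod +x /tmp/push-persistent.sh

# Step 2: trigger it on an interval — e.g. this crontab line runs it hourly:
#   0 * * * * /tmp/push-persistent.sh

# Step 3: at rebuild time, reverse the direction to restore:
#   rsync -az backupuser@fileserver:/srv/persistent/vm1/autobackup/ \
#         /var/lib/unifi/backup/autobackup/
```

With Ansible, the same script and crontab entry could be templated per machine, with the source directory and remote path as host variables.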
I would recommend rsync if you want to keep the local and remote directories synced. Rsync will only copy changed files, whereas SFTP and SCP will copy everything every time. In this case, rsync would be much more efficient and faster.