Backups made easy: btrfs + snapper + snapborg
you most likely need a backup of your /bin/, /lib/, /usr/, and /etc/ folders, because they contain files essential for your system's operation.
I guess we disagree about the point of backups then.
For me, backups are about data: if I switch operating systems or something, they're what I'll need to get set up again. /bin and /lib will just cause more problems than they solve, since they likely won't be compatible with whatever system libraries are there. /usr is only relevant for the config files, and 90% of those are at the distribution's defaults (and on Arch, everything is in /usr anyway). /etc is useful for configs, but like /usr/etc, 90% of it is just defaults.
If I'm restoring from backup, I don't want random breakage from config files, libraries, and binaries being incompatible. Most of the stuff outside /home is irrelevant, and the parts I do care about can be backed up manually by improving my processes.
For example, I stick my self-hosted stuff into a git repo, and any changes I make get committed. I have scripts that deploy the changes wherever they need to go (e.g. I use quadlet, so this means systemd unit files), and those can be tweaked if something changes on a different OS. 99% of the things I host are containerized, so there is zero system-level impact (i.e. nothing that can't easily live in git or /home). For the rest, I largely accept that I'll probably have to fix a few things when restoring after an emergency, such as configs for system services (e.g. /etc/sub(uid|gid), podman tweaks, users and groups, etc.). I try to keep these changes extremely limited, and I keep notes in the README of my git repo, but it's never going to be 100% accurate since I often just want to get whatever it is working and promise myself I'll update my docs later.
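To make that concrete, here's a minimal sketch of what such a deploy script could look like; the repo layout and the unit name are hypothetical, not the actual setup:

```sh
#!/bin/sh
# deploy.sh -- hypothetical sketch: copy quadlet units from the git checkout
# into the rootless quadlet directory and let systemd regenerate the services.
set -eu
REPO="$HOME/selfhosted"                    # hypothetical repo checkout
UNITS="$HOME/.config/containers/systemd"   # standard rootless quadlet path
mkdir -p "$UNITS"
cp "$REPO"/quadlet/*.container "$UNITS"/
systemctl --user daemon-reload             # quadlet turns .container files into services
systemctl --user restart myapp.service     # "myapp" is a placeholder unit name
```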
That said, the most important thing you can do, IMO, is test restoring from backup. I'm moving my systems toward containerization, which gives me a chance to fix all the stuff I totally forgot about. I keep the old system around for a while to reference (on an old drive or something), pulling files (and properly backing them up) as I go, and then I'll nuke it some months later. The important data is all in /home, which gets backed up, so it's mostly about service configuration (which I can look up later).
snapper
Snapper absolutely rocks. I'm spoiled by the openSUSE line of distributions, where this is baked in. When an upgrade goes sideways on my Tumbleweed desktop, I simply roll back and retry the upgrade in a couple of days. It's magical.
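For anyone who hasn't seen it, the rollback is about this much work (the snapshot number is a placeholder; openSUSE ships the snapper setup preconfigured):

```sh
sudo snapper list          # find the pre-upgrade snapshot number
sudo snapper rollback 42   # 42 is a placeholder; makes that state the new default
sudo reboot                # boot back into the known-good system
```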
snapborg
Looks really cool! I won't be backing up most of /; instead, I'll be a bit selective about which subvolumes I use. I already use btrfs everywhere, so this should be relatively drop-in.
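Being selective mostly means giving things their own subvolume, since btrfs snapshots stop at subvolume boundaries. A quick sketch (paths are just examples):

```sh
# Anything on its own subvolume can be snapshotted/backed up (or skipped) independently:
sudo btrfs subvolume create /home/scratch   # example: a no-backup scratch area
sudo btrfs subvolume list /                 # verify the layout
```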
I guess we disagree about the point of backups then.
We just use different threat models.
For me, the main threat is disk failure, so I want to get a new disk, restore the system from backup, and continue as if nothing happened.
Of course, if your hardware or OS configuration changes, you shouldn't back up /usr, /etc, and other such folders.
However, the proposed workflow can be adapted to both scenarios: a single snapborg config backs up snapshots from a single subvolume, so I actually use two configs: one for /home (excluding /home/.home_unbacked) and another for / (excluding /var and some other directories). These two configs have different backup schedules and different retention policies, so in case of a hardware/OS change, I'll just restore the /home backup without restoring /.
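The subvolume/snapper side of that split looks roughly like this (a sketch; snapborg's per-config files then point one at each snapper config, but I'd be guessing at its exact keys, so they're omitted):

```sh
sudo snapper -c root create-config /       # snapper config "root" for /
sudo snapper -c home create-config /home   # snapper config "home" for /home
# Excluding /home/.home_unbacked works by making it its own subvolume:
# btrfs snapshots don't descend into nested subvolumes, so snapshots of
# /home (and hence the backups) simply won't contain it.
sudo btrfs subvolume create /home/.home_unbacked
```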
Makes sense.
I'm more interested in cutting off-site backup costs, so my NAS has a RAID mirror to reduce the chance of total failure, and the off-site backup only stores important data. I don't even back up the bulk of it (ripped movies and whatnot), just the important stuff.
Restore from backup looks like this for my NAS:
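Something along these lines; this is a sketch under the assumptions above, and the package manager, repo URL, and archive name are all placeholders:

```sh
# 1. Fresh OS install, then reinstall packages from a saved list (Arch-style example):
sudo pacman -S --needed - < pkglist.txt
# 2. Clone the self-hosted repo and redeploy the containerized services:
git clone https://example.com/me/selfhosted.git ~/selfhosted
~/selfhosted/deploy.sh
# 3. Pull the important data back from the off-site borg repo:
borg extract /mnt/offsite/repo::archive-2024-01-01 home
```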
Personal devices are similar, but installing packages is manual (perhaps I'll back up my explicitly installed package list or something to speed it up a little). Setup takes longer than your method, but I think it's worth the reduced storage costs, since I've never actually needed to do it and a few hours of downtime is totally fine for me.
Dude. Awesome blog.
Thanks. :3
I don't really understand the advantage of backing up the whole btrfs volume.
Snapshots are made atomically, so this workflow lets you separate snapshot creation from the actual backing up. And since subvolumes are dynamically sized, you can create as many subvolumes as you like and back up only those that need it.
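The manual version of that separation, as a minimal sketch (the repo path and archive name are assumed for illustration):

```sh
# Take an atomic, read-only snapshot (instant), then back it up at leisure:
sudo btrfs subvolume snapshot -r /home /home/.snap-backup
borg create /mnt/backup/repo::home-2024-01-01 /home/.snap-backup
sudo btrfs subvolume delete /home/.snap-backup
```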
It's most useful for something like /home, so you have a full backup of your home directories without having to worry whether some app has dumped its settings in a folder you haven't marked for backup. But backing up / is also useful because it gives you easy recovery in the event of, say, disk failure: just restore the entire system from a backup.
I use borg with borgmatic. I just back up / (which includes home) and exclude some folders I don't want (like /mnt or /tmp).
It does the same as you just said.
I have 20 borg snapshots of my nearly full 1 TB drive, which together take about 400 GB of space on my NAS.
I do it at the file structure level, not at the block device level as the article suggests. Why would I want to back it up at the block device level instead?
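For reference, the setup described above could look something like this; the repo path is a placeholder, and it assumes borgmatic's newer flattened config format:

```sh
# As root: write a minimal borgmatic config and run a backup with it.
cat > /etc/borgmatic/config.yaml <<'EOF'
source_directories:
  - /
repositories:
  - path: /mnt/nas/borg-repo   # placeholder repo location
    label: nas
exclude_patterns:
  - /mnt
  - /tmp
  - /proc
  - /sys
keep_daily: 7
keep_weekly: 4
EOF
borgmatic --verbosity 1   # run the configured backup
```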
btrbk handles the automated generation of snapshots, as well as sending/receiving them to another drive for backup. What does this workflow accomplish that btrbk doesn't do on its own? Compression?
btrbk requires that the destination disk also be formatted as btrfs (it relies on btrfs send/receive). I didn't want that constraint.
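With borg the target can be anything, for example (the paths and host are placeholders):

```sh
# A borg repo works on any filesystem, or over SSH -- no btrfs on the target needed:
borg init --encryption=repokey /mnt/ext4-disk/repo
borg init --encryption=repokey ssh://user@nas.example/./backups/host1
```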