Sending Big ZFS Snapshots over SSH

2019/05/19

As I mentioned already, I’m back on rsync.net.

I’m glad that I use rsync.net with ZFS because ZFS is all around me. ZFS is on the NAS, ZFS is on my external LaCie drive, and I would even run ZFS on boot on my Macs if there were a simple way to install it. Two decades ago I would have sacrificed every minute of my spare time to get something like this done, but now that I’m older I like simple things and minimalism.

Right after I got the zpool on rsync.net, I started to send a big ZFS snapshot, and that’s where the first issue appeared. For snapshot management and replication I use Jim Salter’s excellent tool Sanoid, which ships with syncoid, a ZFS send/receive wrapper I use to replicate to rsync.net. Syncoid establishes an SSH connection to the remote server, and that was my problem: every few hours SSH died with a Broken pipe, which in practice means the connection was lost. After tweeting about the issue with Jim Salter, I set up a cron job that starts a syncoid process every six hours. Both FreeBSD 12, which I run on the NAS, and FreeBSD 11.2, which rsync.net runs, support resumable ZFS send/receive. So what the cron job does is pretty simple: it starts a syncoid run at midnight, at six in the morning, at noon and at six in the evening. If the previous run is still going, the new one doesn’t start another transfer (ZFS detects a send in progress), but if the previous run was interrupted by a lost connection, the new syncoid run resumes the send from where it stopped.
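For the curious, the setup looks roughly like this. It’s a sketch, not my exact configuration: the dataset names, the rsync.net user and host, and the syncoid path are placeholders.

```shell
# Hypothetical crontab entry: attempt a syncoid run at 00:00, 06:00,
# 12:00 and 18:00. If the previous send is still running, the new
# receive on the remote side fails and this run exits; if the previous
# send was interrupted, syncoid reads the receive_resume_token on the
# target and resumes the transfer instead of starting over.
0 0,6,12,18 * * * /usr/local/bin/syncoid --no-sync-snap tank/data zfsuser@usw-s001.rsync.net:data1/backup
```

The `--no-sync-snap` flag tells syncoid to replicate the existing Sanoid snapshots rather than creating an extra snapshot of its own for each run.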

Speaking of Jim Salter, check out his blog JRS Systems and his very popular post about using ZFS mirrors instead of RAIDZ (I’m a big fan of ZFS mirrors and would never use RAIDZ; BTW, rsync.net uses RAIDZ-3). If you are interested in ZFS, you can listen to Jim Salter in episode 401 and episode 402 of TechSNAP, where he talks about ZFS and his Sanoid/syncoid tools, among other topics.