rsync script for snapshot backups

Dennis Steinkamp dennis at lightandshadow.tv
Sun Jun 19 12:22:10 UTC 2016


Hey guys,

I tried to create a simple rsync script that makes daily backups
of a ZFS storage box and puts each one into a timestamped folder.
After the initial full backup, every following backup should only
contain "new data", with everything unchanged referenced via hard
links (--link-dest).
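
(In case the general idea isn't clear: this is the usual --link-dest
rotation. A made-up example with fake dates and a fake module name:

# Day 1: full copy
rsync -a server::module /backups/18-06-2016

# Day 2: only changed files are transferred; unchanged files become
# hard links pointing into the previous day's snapshot
rsync -a --link-dest=/backups/18-06-2016 server::module /backups/19-06-2016
)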

That at least seemed like a simple enough scenario to achieve with
my pathetic scripting skills. This is what I came up with:

#!/bin/sh

# rsync copy script for rsync pull from FreeNAS to BackupNAS for Buero dataset

# Set variables
EXPIRED=`date +"%d-%m-%Y" -d "14 days ago"`

# Copy previous timefile to timeold.txt if it exists
if [ -f "/volume1/rsync/Buero/timenow.txt" ]
then
     yes | cp /volume1/rsync/Buero/timenow.txt /volume1/rsync/Buero/timeold.txt
fi

# Create current timefile
echo `date +"%d-%m-%Y-%H%M"` > /volume1/rsync/Buero/timenow.txt

# rsync command
if [ -f "/volume1/rsync/Buero/timeold.txt" ]
then
     rsync -aqzh \
     --delete --stats --exclude-from=/volume1/rsync/Buero/exclude.txt \
     --log-file=/volume1/Backup_Test/logs/rsync-`date +"%d-%m-%Y-%H%M"`.log \
     --link-dest=/volume1/Backup_Test/`cat /volume1/rsync/Buero/timeold.txt` \
     Test@192.168.2.2::Test /volume1/Backup_Test/`date +"%d-%m-%Y-%H%M"`
else
     rsync -aqzh \
     --delete --stats --exclude-from=/volume1/rsync/Buero/exclude.txt \
     --log-file=/volume1/Backup_Buero/logs/rsync-`date +"%d-%m-%Y-%H%M"`.log \
     Test@192.168.2.2::Test /volume1/Backup_Test/`date +"%d-%m-%Y-%H%M"`
fi

# Delete expired snapshots (2 weeks old)
# (loop instead of a bare [ -d ... ] test, so it also works when the
# glob matches several directories, or none at all)
for OLD in /volume1/Backup_Buero/$EXPIRED-*
do
     [ -d "$OLD" ] && rm -Rf "$OLD"
done

Well, it works, but there is a huge flaw in this approach and I am
unfortunately not able to solve it on my own.
As long as the backups finish properly, everything is fine, but as
soon as one backup job can't finish for some reason (say it gets
aborted accidentally, or there is a power cut), the whole backup
chain is messed up and the script usually creates a new full backup,
which fills up my backup storage. I suspect this is because
timenow.txt is already written before rsync runs, so an aborted run
still becomes the --link-dest reference for the next one.

What I would like to achieve is to improve the script so that a
backup run that wasn't finished properly is resumed the next time
the script triggers.
Only once that has completed successfully should the next
incremental backup be created, so that the files that didn't change
since the previous backup can be hard-linked properly.
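
What I imagine is something along these lines (untested, and
last_good.txt is just a name I made up for a file that records the
timestamp of the last snapshot that actually finished): rsync always
writes into a fixed "incomplete" directory, which is only renamed to
its timestamp once rsync exits with status 0. An aborted run leaves
the directory behind, and the next run simply resumes into it.

#!/bin/sh
# Untested sketch: resume into a fixed directory and only advance
# the --link-dest chain when rsync reports success.

DEST=/volume1/Backup_Test
WORK=$DEST/incomplete       # fixed name, so an aborted run is picked up again
STAMP=`date +"%d-%m-%Y-%H%M"`
LAST=`cat /volume1/rsync/Buero/last_good.txt 2>/dev/null`

# --partial keeps partially transferred files so a rerun can continue them
rsync -aqzh --delete --partial \
     ${LAST:+--link-dest=$DEST/$LAST} \
     Test@192.168.2.2::Test "$WORK"

if [ $? -eq 0 ]
then
     mv "$WORK" "$DEST/$STAMP"        # promote the finished snapshot
     echo "$STAMP" > /volume1/rsync/Buero/last_good.txt
fi

That way last_good.txt is only ever updated after a successful exit,
so --link-dest can never point at a half-finished snapshot. Would
that be a sensible way to do it?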

I did a little bit of research and I am not sure whether I am on the
right track here, but apparently this can be done with return codes;
I honestly don't know how to do that, though.
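
If I read the man page correctly, the return code in question is
rsync's exit status: $? is 0 after a clean run and non-zero otherwise
(24, for example, is documented as "partial transfer due to vanished
source files"). A fragment of what I mean, with SRC and DST standing
in for the real module and path:

# Illustration only: SRC/DST stand in for the real module and path.
SRC=Test@192.168.2.2::Test
DST=/volume1/Backup_Test/incomplete

rsync -aqzh "$SRC" "$DST"
RC=$?

case $RC in
     0)  echo "rsync finished cleanly" ;;
     24) echo "finished, but some source files vanished during the run" ;;
     *)  echo "rsync failed with exit code $RC" >&2 ;;
esac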
Thank you in advance for your help, and sorry if this question seems
foolish to most of you.

Regards

Dennis







