Recycling directories and backup performance. Was: Re: rsync --link-dest won't link even if existing file is out of date (fwd)

Robert Bell Robert.Bell at csiro.au
Wed Apr 15 20:57:23 MDT 2015


rsync folks,

Henri Shustak <henri.shustak at gmail.com> wrote:
> LBackup always starts a new backup snapshot with an empty directory. I
> have been looking at extending --link-dest options to scan beyond just
> the previous successful backup to (failed backups / older backups).
> However, there are all kinds of edge cases which are worth considering
> with such a change. At present LBackup is focused on reliability; as
> such, this R&D is quite slow given limited resources. The current
> version of LBackup offers IMHO reliable backups of user data, and the
> scripting sub-system offers a high degree of flexibility.
We recycle directories in our backup scheme, and in our tests it is 3
to 6 times faster than creating a new directory tree and then deleting
an old one.  Your timing will differ - I'd imagine the speed depends on
the relative numbers of files and directories.
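
As a minimal sketch of the idea (the seven-day rotation and the paths
are illustrative, not our actual scripts):

    # Recycle the oldest snapshot as the starting point for tonight's
    # backup; since most of it is still correct, rsync only has to
    # transfer the churn and delete what has gone.
    mv /backups/day-6 /backups/in-progress
    rsync -a --delete /source/ /backups/in-progress/

    # Rotate only after the sync succeeds, so day-0 always names the
    # latest complete backup.
    for n in 5 4 3 2 1 0; do
        mv "/backups/day-$n" "/backups/day-$((n+1))"
    done
    mv /backups/in-progress /backups/day-0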

Half our recycled directories are 5 to 6 days old, and our churn rate
is typically only 0.5% of files and 1% of data each day. So, the
recycled directory is usually about 95% right.


Our backup procedures have provision for looking back at previous
directories, but there is not much to be gained with recycled
directories.  Without recycling, and after a failure, the latest
available backup may not have much in it, and won't be a good place to
link-dest from - you need to go further back, as Henri is considering.
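
For reference, rsync does let a new snapshot look further back: it
accepts multiple --link-dest directories (up to 20), checked in the
order given.  A sketch with hypothetical paths:

    # Populate a fresh snapshot, hard-linking unchanged files against
    # the last three backups rather than only the most recent one.
    rsync -a \
          --link-dest=/backups/day-1 \
          --link-dest=/backups/day-2 \
          --link-dest=/backups/day-3 \
          /source/ /backups/day-0/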

> Yes, every time you start a backup snapshot, a directory is
> re-populated from scratch and this takes time with LBackup. However,
> if you are seeking reliability then you may wish to check out the
> following URL : http://www.lbackup.org

We rarely have a failure with our backups.   If we do, our procedure
just re-labels the unfinished directory, and re-syncs as normal on the
next attempt.
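
In terms of the earlier sketch, that re-labelling amounts to something
like this (names again hypothetical):

    # If the previous run died, set the partial tree aside under a
    # dated label; the next run then recycles a snapshot as normal.
    if [ -d /backups/in-progress ]; then
        mv /backups/in-progress "/backups/failed-$(date +%Y%m%d)"
    fi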

And, in another post, Henri Shustak <henri.shustak at gmail.com> gave
good advice on splitting up backups, etc., to improve performance, for
someone whose original post I didn't find.

> I'll take a look, but I imagine I can't back up the 80 million files I
> need to in under the 5 hours I have for nightly maintenance/backups.
> Currently it's possible by recycling directories...
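
Splitting a job like that into parallel streams (as we do, across
several filesystems) is one way to approach such numbers.  A rough
sketch - the stream count and layout are purely illustrative, and it
assumes GNU xargs and simple directory names:

    # One rsync stream per top-level directory, four at a time.
    ls /source | xargs -P 4 -I{} \
        rsync -a --delete "/source/{}/" "/backups/day-0/{}/"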


Here are some performance figures from our backup yesterday.  (We have
multiple streams to several filesystems).

The backups completed in 41 minutes.
They transferred:
       80533 files out of      18127734 files available (0.4%)
57871616693 bytes out of 6827716377557 bytes available (0.8%)
  - so, with this low churn rate, we could back up 80 million files in
    about 3 hours.
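
That estimate is just the 41-minute run scaled by file count, assuming
the churn and transfer rates hold:

    41 min x (80,000,000 / 18,127,734) = ~181 min = ~3 hours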

Our backup target filesystems include SSD and are managed by SGI's DMF.

Hope this helps.

Rob.

Dr Robert C. Bell
HPC National Partnerships | Scientific Computing
Information Management and Technology
CSIRO
T +61 3 9669 8102 Alt +61 3 8601 3810 Mob +61 428 108 333
Robert.Bell at csiro.au | www.csiro.au | wiki.csiro.au/display/ASC/
Street: CSIRO ASC Level 11, 700 Collins Street, Docklands Vic 3008, Australia
Postal: CSIRO ASC Level 11, GPO Box 1289, Melbourne Vic 3001, Australia


