link-dest and batch-file

Kevin Korb kmk at sanitarium.net
Wed Jul 18 19:53:27 UTC 2018


If you are using ZFS then forget --link-dest.  Just rsync to the same
ZFS mount every time and take a zfs snapshot after the rsync finishes.
Then delete old backups with zfs destroy.
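
A minimal sketch of that rotation (the dataset and mount point names
are illustrative, not from the original setup):

rsync -aH root@192.168.1.103:/home/ /tank/backups/0000009/current/
zfs snapshot tank/backups/0000009@$(date +%F)
# and later, to expire the oldest backup:
zfs destroy tank/backups/0000009@2018-06-25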

On 07/18/2018 03:42 PM, Дугин Сергей via rsync wrote:
> Hello.
> 
> I need today's backup run to save the file metadata to a file, so
> that tomorrow, when creating a backup, instead of the --link-dest
> option I could point rsync at that metadata file; rsync would then
> not scan the folder given to --link-dest but simply read the
> information about that folder from the metadata file.  This would
> greatly save time and reduce the load on the backup server.
> 
> I do not delete with rm -rf; I destroy the ZFS dataset instead.  You
> can also delete via find -delete; there are other ways.
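> 
> A quick sketch of those two alternatives (paths illustrative):
> 
> zfs destroy tank/backups/0000009/2018-06-25
> # or, on a plain filesystem, instead of rm -rf:
> find /home/backuper/.BACKUP/0000009/2018-06-25 -delete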
> 
> On 26 June 2018 at 22:47:56:
> 
>> I don't believe there is anything you can do with the batch options
>> for this.  If you added a --write-batch to each of those runs you
>> would get 3 batch files that would do nothing until something did a
>> --read-batch on them.  And each batch file only holds the differences
>> between a backup and the backup before it, so rsync would still need
>> the previous backup on disk to apply the batch (and this chains back
>> to the oldest backup, making the problem worse).
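>> 
>> A minimal sketch of how the batch options pair up (paths
>> illustrative):
>> 
>> # record the changes while updating one copy:
>> rsync -aH --write-batch=/tmp/batch1 root@192.168.1.103:/home/ /backup/mirror/
>> # replay the recorded changes onto another identical copy:
>> rsync -aH --read-batch=/tmp/batch1 /backup/mirror2/
>> 
>> The replay only works if /backup/mirror2/ already matches the state
>> the batch was computed against, which is exactly the chaining problem
>> above.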
> 
>> Anyway, what you were asking for sounds a lot like rdiff-backup.  I
>> didn't like it myself but maybe you would.
> 
>> BTW, my experience with many millions of files and rsync --link-dest
>> is that running the backup isn't the problem.  The problem came when
>> it was time to delete the oldest backup: an rm -rf took a lot longer
>> than the rsync itself.  If you haven't gotten there yet, maybe you
>> should try one and see whether it will be as big a problem as it was
>> for me.
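>> 
>> A quick way to check (path illustrative):
>> 
>> time rm -rf /home/backuper/.BACKUP/0000009/2018-06-25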
> 
>> On 06/26/2018 03:02 PM, Дугин Сергей via rsync wrote:
>>> Hello.
>>>
>>> I run a bash script from cron that does the following:
>>>
>>> Day 1
>>> /usr/bin/rsync -aH --link-dest /home/backuper/.BACKUP/0000009/2018-06-25 root@192.168.1.103:/home/ /home/backuper/.BACKUP/0000009/2018-06-26
>>>
>>> Day 2
>>> /usr/bin/rsync -aH --link-dest /home/backuper/.BACKUP/0000009/2018-06-26 root@192.168.1.103:/home/ /home/backuper/.BACKUP/0000009/2018-06-27
>>>
>>> Day 3
>>> /usr/bin/rsync -aH --link-dest /home/backuper/.BACKUP/0000009/2018-06-27 root@192.168.1.103:/home/ /home/backuper/.BACKUP/0000009/2018-06-28
>>>
>>> and so on (a generalized version with computed dates is sketched below).
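>>>
>>> # a sketch only; assumes GNU date and the same paths as above
>>> TODAY=$(date +%F)
>>> YESTERDAY=$(date -d yesterday +%F)
>>> /usr/bin/rsync -aH --link-dest /home/backuper/.BACKUP/0000009/$YESTERDAY \
>>>     root@192.168.1.103:/home/ /home/backuper/.BACKUP/0000009/$TODAY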
>>>
>>>
>>> The backup server experiences a heavy load when the number of files
>>> runs into the millions, because the --link-dest option makes rsync
>>> scan the previous day's files.  Is it possible to use the batch-file
>>> mechanism in such a way that, when using --link-dest, the file with
>>> the metadata from the current day could be reused the following day,
>>> without having to scan the folder that --link-dest points to?
>>>
>>>
>>> Yours faithfully,
>>>   Sergey Dugin                       mailto:drug at qwarta.ru
>>>  QWARTA
>>>
>>>
> 
> 
> 
> 

-- 
~*-,._.,-*~'`^`'~*-,._.,-*~'`^`'~*-,._.,-*~'`^`'~*-,._.,-*~'`^`'~*-,._.,
	Kevin Korb			Phone:    (407) 252-6853
	Systems Administrator		Internet:
	FutureQuest, Inc.		Kevin at FutureQuest.net  (work)
	Orlando, Florida		kmk at sanitarium.net (personal)
	Web page:			https://sanitarium.net/
	PGP public key available on web site.
~*-,._.,-*~'`^`'~*-,._.,-*~'`^`'~*-,._.,-*~'`^`'~*-,._.,-*~'`^`'~*-,._.,
