(--delay-updates and --partial) re-hashing the already downloaded files?
haqthat at gmail.com
Thu Apr 18 08:45:41 MDT 2013
I am, and it doesn't matter whether I use --partial-dir or not.
If I do use --partial-dir, I am basically (in theory) just using a different
name than the default .~tmp~ dir that rsync gives you for partially
transferred files.
I thought I had it figured out by not using --partial-dir (it was me you were
talking to in IRC, btw, hello again), but that was my error; using it or not
using it, it's still the same.
On Wed, Apr 17, 2013 at 3:43 PM, Kevin Korb <kmk at sanitarium.net> wrote:
> Are you also using --partial-dir ?
> I discussed this problem with someone recently in #rsync and IIRC the
> solution was to not use --partial-dir
> On 04/17/13 16:40, John Pierman wrote:
> > I am backing up a LARGE data set, over a very unstable internet
> > connection.
> > I NEED --delay-updates, because I do a flash cut-over once
> > everything has transferred. Yes, I know --copy-dest does this too,
> > but if the connection breaks, anything that has already made it over
> > gets put into place (not what I want). I need it to go from beginning
> > to end, and then I handle the files separately.
> > If I use --partial and --delay-updates together, I get the desired
> > effect and all is well......EXCEPT for.....
> > Let's say I have 10k files. I start my rsync, get through 2k
> > files, and the internet connection is lost.
> > Everything is fine, my partial files are safe, nothing has been
> > moved into place yet, I'm happy.
> > So I restart rsync (well, a script does, once the internet comes
> > back). Here is where I get unwanted behavior. This rsync run LOOKS
> > like it is starting from the beginning (transferring those first 2k
> > files INSANELY fast if you watch --progress). I realize it's not
> > really TRANSFERRING anything, but it's CHECKING all my partial
> > files that haven't been put into place yet.
> > The problem is, my connection is dog slow and unreliable (I can't
> > change this). If the connection drops frequently I'm basically
> > spending too much time (re-hashing?) those files I already have,
> > and not picking up where I left off.
> > I'm sure there's something I can do to get the desired behavior of
> > picking up where I left off without rehashing the first 2k files?
> > Help?
> --
> Kevin Korb Phone: (407) 252-6853
> Systems Administrator Internet:
> FutureQuest, Inc. Kevin at FutureQuest.net (work)
> Orlando, Florida kmk at sanitarium.net (personal)
> Web page: http://www.sanitarium.net/
> PGP public key available on web site.