rsync 3.1.1 (and HEAD) grinds to a halt over sshfs

Greg Bell gbell_spamless at yahoo.com
Mon Aug 1 11:06:01 UTC 2016


Hi Devs and others,

First, many thanks for 14+ years of personal rsync use.

I'm trying to rsync to an sshfs mount point instead of using ssh 
transport.  The reason for this is that I'm using encfs to create 
files which land on the (untrusted) destination encrypted.

After witnessing stalled rsync progress, a barely utilized internet 
connection and an idle CPU, I stripped this down to a testcase that's 
just an rsync to an sshfs mount point.

That testcase floods the connection for a bit, but when it gets to a 
directory with lots of small files (about 2500), it slows to a crawl.  
Barely any bandwidth is being used, and barely any CPU.

strace shows that the writes are infrequent, and in between rsync is 
doing lstat()s, chmod()s and the like.  Could it be that the rsync 
process lstat'ing the destination is blocking the writes?

Some strace output here:
http://pastebin.com/fKcMjKEz
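
Something like this reproduces that sort of trace (PID is just a 
stand-in for whichever local rsync process is doing the writes; 
-tt/-T add timestamps and per-syscall times):

strace -f -tt -T -p PID -o rsync-sshfs.trace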

Command line is:

rsync -rv --size-only --no-group --no-owner --no-perms --no-times 
--whole-file --inplace --exclude-from SOMEFILE  /data /mnt/remote

With those options, I think I've succeeded in minimizing the number of 
lstats and chmods rsync has to do through the sshfs mount.  (For 
reference, a plain rsync -rv performs just as poorly.)

I understand that rsync runs its destination process on the local 
machine and then does its file I/O through sshfs, and I can see why 
that might be slow.  A plain cp, however, floods the connection.
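
The cp test was nothing fancier than something along the lines of:

cp -r /data /mnt/remote

so it's the same data going through the same sshfs mount.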

Is there anything else I can do to optimize (minimize) the I/O so 
rsync is fast over sshfs?  --outbuf?  Batch mode?
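
By batch mode I mean something like this (untested sketch, batch file 
name made up): build the batch locally, then replay it against the 
mount:

rsync -rv --only-write-batch=/tmp/data.batch --exclude-from SOMEFILE /data /mnt/remote
rsync -rv --read-batch=/tmp/data.batch /mnt/remote

though I suspect the --read-batch step would still do its writes file 
by file through the mount, so it may not buy anything.  And if I'm 
reading the man page right, --outbuf only controls the buffering of 
rsync's own output messages, so that may be irrelevant here.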

I wonder whether the rsync processes are fighting/blocking each other 
too much, rather than just letting large blocks of writes go through?


Regards,
Greg
