Parallelizing rsync through multiple ssh connections

SERVANT Cyril cyril.servant at cea.fr
Thu Nov 4 15:58:03 UTC 2021


Hi, I want to increase the speed of rsync transfers over ssh.


1. The need

TL;DR: we need to transfer one huge file quickly (100Gb/s) through ssh.

I'm working at CEA (the French Alternative Energies and Atomic Energy
Commission). We have a compute cluster complex, and our customers regularly
need to transfer big files to and from the clusters. The bandwidth between the
customers and us is generally between 10Gb/s and 100Gb/s. The file system used
is LustreFS, which can handle more than 100Gb/s of read and write throughput.
One security constraint is the use of ssh for every connection.

In order to maximize transfer speed, I first tried different Ciphers / MACs.
The list of authorized Ciphers / MACs is provided by our security team. With
these constraints, I can reach 1Gb/s to 3Gb/s, which is still far from the
expected result. This is due to the way encryption/decryption works on modern
CPUs: it is really efficient thanks to AES-NI, but it is single-threaded, so
the bandwidth limiter is the speed of a single CPU core.

So the next step is: just use parallel or xargs with rsync. It works like a
charm in most cases, but not in the compute cluster case. As I said earlier,
the files are stored in LustreFS. The good practice for this file system is to
create very few, but very large, files. And with the way compute clusters work,
you generally end up with one really big file, often hundreds of gigabytes or
even terabytes.
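
For the many-files case, the parallel/xargs approach boils down to something
like this sketch (host, paths and job count are placeholders); it scales with
the number of files, which is exactly what one huge LustreFS file does not
provide:

#!/usr/bin/env python3
"""What the parallel/xargs approach amounts to: one rsync per file."""
import pathlib
import subprocess
from concurrent.futures import ThreadPoolExecutor

HOST = "cluster-login"                 # placeholder
SRC_DIR = pathlib.Path("/data/out")    # placeholder
DST_DIR = "/lustre/scratch/out"        # placeholder
JOBS = 8

def copy(path: pathlib.Path) -> None:
    # Each worker gets its own ssh connection, hence its own CPU core
    # for encryption.
    subprocess.run(
        ["rsync", "-a", str(path), f"{HOST}:{DST_DIR}/"],
        check=True,
    )

files = [p for p in SRC_DIR.iterdir() if p.is_file()]
with ThreadPoolExecutor(max_workers=JOBS) as pool:
    list(pool.map(copy, files))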


2. What has been done

I created parallel-sftp (https://github.com/cea-hpc/openssh-portable/). It is
a fork of OpenSSH's sftp which creates multiple ssh connections for a single
transfer. This way, parallelization is really simple: files are transferred in
parallel just like with the parallel/xargs solution, and big files are
transferred in chunks directly into the destination file (created as a sparse
file). One big advantage of this solution is that it doesn't require any change
on the server side: all the parallelization is done on the client side.
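
Conceptually, the client-side chunking looks like this (only a sketch, not the
actual parallel-sftp code; fetch_range() is a hypothetical stand-in for one
sftp connection reading a byte range):

#!/usr/bin/env python3
"""Conceptual sketch of client-side chunking into a sparse destination file."""
from concurrent.futures import ThreadPoolExecutor

CHUNK = 1 << 30  # 1 GiB per chunk, arbitrary

def fetch_range(remote_path: str, offset: int, length: int) -> bytes:
    """Placeholder: read bytes [offset, offset+length) over a dedicated connection."""
    raise NotImplementedError

def parallel_fetch(remote_path: str, local_path: str, size: int, jobs: int) -> None:
    # Pre-create the destination as a sparse file of the final size.
    with open(local_path, "wb") as dst:
        dst.truncate(size)

    def worker(offset: int) -> None:
        data = fetch_range(remote_path, offset, min(CHUNK, size - offset))
        with open(local_path, "r+b") as dst:
            dst.seek(offset)
            dst.write(data)

    with ThreadPoolExecutor(max_workers=jobs) as pool:
        list(pool.map(worker, range(0, size, CHUNK)))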

However, there are two caveats: there is no consistency check of the copied
files, and an interrupted transfer must be restarted from scratch, because
there is no way to know exactly which chunks of a big file have already been
transferred.
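
To illustrate the first caveat: gaining confidence currently means an
after-the-fact whole-file comparison, along the lines of this sketch (the host
name is a placeholder and sha256sum is assumed to exist on the remote side),
which costs an extra full read of both copies:

#!/usr/bin/env python3
"""After-the-fact consistency check: compare a local and a remote sha256."""
import hashlib
import subprocess

HOST = "cluster-login"  # placeholder

def local_sha256(path: str) -> str:
    h = hashlib.sha256()
    with open(path, "rb") as f:
        for block in iter(lambda: f.read(1 << 20), b""):
            h.update(block)
    return h.hexdigest()

def remote_sha256(path: str) -> str:
    out = subprocess.run(
        ["ssh", HOST, "sha256sum", path],
        check=True, capture_output=True, text=True,
    ).stdout
    return out.split()[0]

def verify(local_path: str, remote_path: str) -> bool:
    return local_sha256(local_path) == remote_sha256(remote_path)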


3. Is rsync the best solution?

Now I'm thinking that adding parallelization to rsync is the best solution. We
could take advantage of the delta-transfer algorithm in order to transfer only
parts of a file. I can imagine a first rsync connection taking care of
detecting the diffs between the local and remote files, and then forking (or
creating threads) for the actual transfers. The development work could be
split into two parts (a rough sketch follows the list):
- adding the possibility to transfer part of a file (from byte x to byte y).
- adding the possibility to delegate the transfers to other threads /
  processes.
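
Here is a very rough sketch of how those two parts could fit together;
diff_ranges() and transfer_range() are hypothetical placeholders, and none of
this reflects rsync's actual internals:

#!/usr/bin/env python3
"""Rough sketch of the proposed split, not rsync's actual design.

diff_ranges() stands in for a first connection running the delta-transfer
algorithm and returning the (offset, length) ranges that differ;
transfer_range() stands in for a worker moving one byte range over its own
ssh connection.
"""
from concurrent.futures import ProcessPoolExecutor
from typing import Iterable, Tuple

Range = Tuple[int, int]  # (offset, length)

def diff_ranges(src: str, host: str, dst: str) -> Iterable[Range]:
    """Placeholder: detect differing byte ranges between local and remote file."""
    raise NotImplementedError

def transfer_range(src: str, host: str, dst: str, offset: int, length: int) -> None:
    """Placeholder: copy bytes [offset, offset+length) over a dedicated connection."""
    raise NotImplementedError

def parallel_rsync(src: str, host: str, dst: str, jobs: int) -> None:
    # One control connection finds the diffs, then the actual transfers are
    # delegated to a pool of workers, each with its own ssh connection.
    ranges = list(diff_ranges(src, host, dst))
    with ProcessPoolExecutor(max_workers=jobs) as pool:
        futures = [
            pool.submit(transfer_range, src, host, dst, off, length)
            for off, length in ranges
        ]
        for f in futures:
            f.result()  # propagate any transfer error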

What do you think about this? Does it look feasible? If I develop it, does it
have a chance to be merged upstream? I understand it's kind of a niche use
case, but I know it's a frequent need in the super-computing world.

One important thing to note is that at CEA we have the manpower and the will
to develop this functionality. We are also open to sponsoring development
and/or reviews.


Thank you,
-- 
Cyril


