how to migrate 40T data and 180M files

Michal Suchanek hramrach at
Tue Aug 11 02:58:15 MDT 2009

2009/8/11 Jan-Benedict Glaw <jbglaw at>:
> On Tue, 2009-08-11 16:14:33 +0800, Ming Gao <gaomingcn at> wrote:
>> I need to migrate 40T data and 180M files from one storage device to another
>> one, both source and destination will be NFS and mounted to a local suse
>> linux box.
>> The first question is that if there is any risk for such a big number of
>> files? should I divide them into groups and rsync them in parallel or in
>> serial? If yes, how many groups is better?
>> The second question is about memory. How much memory should I install to the
>> linux box? The rsync FAQ says one file
>> will use 100 bytes to store relevant information, so 180M files will use
>> about 18G memory. How much memory should be installed totally?
>> And any other thing I could do to reduce the risk?
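On the memory question: the arithmetic in the FAQ estimate is easy to sanity-check (using decimal GB; the 180M-files and 100-bytes-per-file figures are taken from the question above):

```shell
# Rough file-list memory estimate: ~100 bytes of metadata per file.
files=180000000        # 180M files, from the question
bytes_per_file=100     # per-file cost cited in the rsync FAQ
echo "$(( files * bytes_per_file / 1000000000 )) GB"
```

That prints 18 GB, matching the figure in the question; plan for that much headroom on top of whatever the box otherwise needs.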
> There are no specific risks these days I think, but it sounds like
> this is a task like "copy over anything", so it's merely a matter of
> mounting both filesystems and using two `tar' instances with a pipe
> in between...
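For concreteness, the two-tar pipe looks like this (the SRC/DST paths below are made up stand-ins for the two NFS mounts, which the thread doesn't name):

```shell
# Sketch of the two-tar pipe. SRC and DST stand in for the NFS mounts.
SRC=$(mktemp -d); DST=$(mktemp -d)
echo "hello" > "$SRC/file.txt"
# The first tar streams the whole tree to stdout; the second unpacks
# from stdin into the destination, -p preserving permissions.
(cd "$SRC" && tar -cf - .) | (cd "$DST" && tar -xpf -)
cat "$DST/file.txt"
```

In practice you'd cd into the real mount points instead of temp dirs; tar streams the data without building a full file list in memory first.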

... except it will take forever and is not restartable.

I'm not sure that the ability to restart the rsync run after, say, a
network outage is really an advantage here, though. With this many
files, just scanning both trees to find out what has already been
transferred takes about as long as doing a full transfer.



More information about the rsync mailing list