Core dump - Can not sync big data folders of size 800 GB

Wayne Davison wayned at samba.org
Wed Jul 14 22:19:28 GMT 2004


On Mon, Jul 12, 2004 at 07:48:27PM +0800, Prasad wrote:
> Can anyone tell me what the limit is on the size of data that rsync
> can handle?

If you run "rsync --version", it will tell you how many bits it uses
for file sizes (e.g. 64-bit).  However, a bug that affects large-file
support in 2.6.2 was just found and fixed in CVS.  You can try out the
fix by applying the attached patch.
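
For example, something along these lines (the patch file name is just
whatever you saved the attachment as; adjust the paths for your setup):

    $ rsync --version                    # check the reported file-size bits
    $ cd rsync-2.6.2
    $ patch -p0 < generator-off_t.patch
    $ make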

> Moreover, when I previously tried data sets of 50 - 100 GB, rsync
> took a very long time to build the file list on the source host.
> Is this a limitation of rsync or of the server's resources?

Yes, this is one of the limitations of rsync with large data sets -- the
pre-scan of the source (and an extra scan of the destination when the
--delete option is used) takes a long time and eats up a lot of memory.
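
If you want a rough feel for how large that file list will be before
committing to a full run, a dry run with --stats reports the file count
and total size (the paths here are only placeholders):

    $ rsync -an --stats /data/bigtree/ remotehost:/backup/bigtree/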

> Are there any other faster / high performance tools available for
> synchronizing larger data sizes?

I don't know of any off-hand.  This is an area that future development
is going to target, so it should improve down the road.

..wayne..
-------------- next part --------------
--- generator.c	13 Jul 2004 01:45:51 -0000	1.95
+++ generator.c	14 Jul 2004 16:40:08 -0000	1.96
@@ -205,7 +205,7 @@ static void sum_sizes_sqroot(struct sum_
  *
  * Generate approximately one checksum every block_len bytes.
  */
-static void generate_and_send_sums(struct map_struct *buf, size_t len, int f_out)
+static void generate_and_send_sums(struct map_struct *buf, OFF_T len, int f_out)
 {
 	size_t i;
 	struct sum_struct sum;

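For anyone wondering why that one-line change matters: size_t is only
32 bits on a 32-bit build, so a 64-bit file length passed through it
silently loses its upper bits, whereas OFF_T stays 64 bits wide.  Here
is a small standalone sketch of the effect (not rsync code; OFF_T is
just typedef'd to stand in for rsync's 64-bit offset type):

    #include <stdio.h>
    #include <stdint.h>

    typedef int64_t OFF_T;              /* stand-in for rsync's 64-bit offset type */

    static void take_size_t(size_t len) { printf("as size_t: %lu\n", (unsigned long)len); }
    static void take_off_t(OFF_T len)   { printf("as OFF_T:  %lld\n", (long long)len); }

    int main(void)
    {
        OFF_T file_len = 6LL * 1024 * 1024 * 1024;   /* a 6 GB file */
        take_size_t(file_len);   /* truncated to 2 GB if size_t is 32 bits */
        take_off_t(file_len);    /* keeps the full 6 GB value */
        return 0;
    }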
