Fragmentation on XFS

Rob Bosch robbosch at
Mon Feb 25 22:04:43 GMT 2008

The patch truncates the file with ftruncate if a transfer fails in
receiver.c.  This should avoid the problem you mention.  Even if this didn't
occur, the file would still exist on the FS at the predefined size.  It would
be in the allocation table and exist on disk (you can see it under Windows
Explorer).  It wouldn't have data for the entire file size, but it would
still be a valid, if sparse, file.

Writing in larger chunks won't fully solve the problem unless you have a
machine that does not do much concurrent I/O.  My Windows machine using NTFS
experienced high fragmentation in ALL files, not just large ones.  The
server was receiving about 75 concurrent inbound rsync processes.  On NTFS I
used preallocation for all files.  This changed the throughput on our SATA
RAID array from a mere 10MBps to 80MBps, bursting to 150MBps.  The only
change we made was the preallocate option, and as our files became
defragmented through subsequent updates we noticed the huge change in
performance.  We also never experienced any OS corruption or other issues
with the NTFS filesystem.

The reason we moved from Windows to Linux for our rsync machines was due to
scalability (CPU and memory usage) and reliability (we were getting crashes
due to driver problems under Windows).
