RAM speedup

Matthias Schniedermeyer ms at citd.de
Sun Jun 28 19:09:53 UTC 2020


On 28.06.2020 16:46, Rupert Gallagher wrote:
> -------- Original Message --------
> On Sunday 28 June 2020 13:58, Matthias Schniedermeyer <ms at citd.de> wrote:
> 
> > On 27.06.2020 11:22, Rupert Gallagher via rsync wrote:
> >
> > > On Friday 26 June 2020 21:58, Rupert Gallagher via rsync <rsync at lists.samba.org> wrote:
> > >
> > > > Hello,
> > > > As disks are slow, and rsync reads and writes so much that for the bus it is the equivalent of context switching galore, would it be possible to use RAM as a buffer? Say you have 10 GB of spare RAM: rsync uses the bus at its peak to read 10 GB, then again to write it out. This would be more efficient than lots of small read/write operations.
> > > > Thank you
> > >
> > > Current task: rsync 752 GB
> > > source disk
> > > Writing speed: 77 MB/s
> > > Reading speed: 97 MB/s
> > > target disk
> > > Writing speed: 117 MB/s
> > > Reading speed: 99 MB/s
> > > Actual time: 380 min (6.3 hours) to copy 648 GB
> > > Actual speed: 28 MB/s (648 GB / 380 min =~ 1.7 GB/min =~ 1700 MB/min / 60 =~ 28 MB/s)
> >
> > Unfortunately you left out every other detail.
> 
> 
> > Complete rsync commandline?
> 
> /usr/local/bin/rsync --recursive --links --times --modify-window=1 --devices --specials --update --owner --group --perms --delete --delete-before --delete-excluded --exclude-from=/etc/excluded_from_backup.conf --numeric-ids --outbuf=Block --inplace --link-dest=/backup/latest/ /archive /backup

--link-dest means "more metadata operations".

This is a hardlinked backup-store?

With or without deletion of older backups?

What is the age of that backup store?
Hardlink-farms age a filesystem pretty severely: after some time the free space gets heavily fragmented, so the HDD has to seek like hell to piece the metadata & file content into many small holes.

Personally I only use hardlink-farms on SSDs nowadays; HDDs "don't really like" hardlink-farms.
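Just so we are talking about the same thing, by "hardlink-farm" I mean a setup roughly like this (the paths and dates are only an example, not your actual layout):

  # every unchanged file in the new snapshot becomes a hardlink into the
  # previous snapshot, so each run adds another full tree of metadata entries
  rsync -a --link-dest=/backup/2020-06-27/ /archive/ /backup/2020-06-28/
  ln -sfn /backup/2020-06-28 /backup/latest

That is exactly the kind of workload that turns into many small, scattered metadata writes on a HDD.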
 
> > What hardware? (From the numbers it is only clear that you seem to talk about HDDs.)
> > What HDDs?
> 
> source:
> ST2000NX0403  sata hdd
> Writing speed      : 117 MB/s
> Reading speed      : 99 MB/s
> 
> destination:
> ST5000LM000-2AN1 sata hdd
> Writing speed      : 74 MB/s
> Reading speed      : 89 MB/s
> 
> > What computer? (Laptop? Desktop? Server? Raspberry Pi? Age?)
> 
> Supermicro A2SDi-4C-HLN4F, newish

That mainboard has an Intel Atom C3558 soldered to it. That's a 2017 Atom running at 2.2 GHz.

I have no personal experience with Atom CPUs, so I can only say generically: "not exactly built for speed".

> > What "Buses"? ( a) Any modern "bus" is NOT saturated by those numbers. b) All modern "buses" (Except USB, to some degree) are P2P, you can't even connect 2 devices to the same bus. (Except USB, but there are usually several controllers so you don't have to use same bus).)
> 
> Supermicro CSE-M14TQC 4x SAS/SATA bay, connected with a CBL-SAST-0616 (50 cm) mini-SAS HD to 4x SATA cable. The CSE receives the 4 SATA cables; the mini-SAS end is plugged into the mainboard.

AFAICT each HDD is in effect connected to a separate channel, so no contention there.

> > With or without networking involved?
> 
> no network involved
> 
> > What Filesystem? What mount-options?
> 
> FFS2

This means either you are using a flash filesystem on a HDD, which would be "odd",
or you are using a BSD-type OS. I would guess FreeBSD?

In both cases: No personal experience.

I mainly use Linux systems with XFS as the filesystem. Personally I haven't had a problem saturating most storage for more than a decade.
But I also use separate storage types for different content: files with a low average filesize go on SSDs, and HDDs only get used for files with a largish average filesize, mostly more than 10 MB per file.
And I also use "rsync --preallocate", so large files are stored as contiguously as possible.
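As a rough sketch of what I mean (Linux tools assumed, so it may not map 1:1 to your BSD setup; the paths and the filename are placeholders):

  # large files go to the HDD, preallocated so they stay contiguous
  rsync -a --preallocate /source/bigfiles/ /hdd/bigfiles/
  # afterwards, check how many extents a copied file ended up in
  filefrag /hdd/bigfiles/some-large-file.iso

On a fresh filesystem that is usually a handful of extents per file; on an aged, fragmented one the same copy ends up in many more pieces.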

> > AVG Filesize? Directory structure? Fragmentation?
> 
> mixed

That is what average means:
Total filesize divided by total number of files. You have already determined the total filesize.
Now you only need the file count: $(find /source -type f | wc -l)

Any given set of files has an AVG.
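Something like this gives you the number directly (GNU find assumed for -printf; on BSD you would feed stat(1) output into the same awk instead; /source is a placeholder):

  find /source -type f -printf '%s\n' | awk '{sum+=$1; n++} END {print n " files, avg " sum/n " bytes"}'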

-- 

Matthias


