latest sources don't include "drop_cache" option

Linda Walsh rsync at tlinx.org
Wed Dec 4 01:13:27 MST 2013


On 12/3/2013 6:01 AM, Holger Hoffstaette wrote:
> On Mon, 02 Dec 2013 00:30:58 -0800, Linda A. Walsh wrote:
>
>> Was there some reason that patch got dropped?
>
> AFAIK it was never applied in the first place. 
----
It was in openSUSE up to and including their latest release, openSUSE 13.1.


> I disagree with the
> reasoning, but whatever. This just came up this morning on Gentoo:
> https://bugs.gentoo.org/show_bug.cgi?id=475408
----
What reasoning? 

To me it's a no-brainer.  I'm doing an update from a 7.3T partition
w/6.3T used, to another of similar size, both on the same system.

I have considerably less than 1T of memory, so if rsync doesn't severely
police its cache usage, it could easily evict everything else in the
cache scores of times over.


>
> ..and as I looked for solutions that don't require the patch I found:
> https://github.com/Feh/nocache
>
> This seems to work and is useful to other applications as well, if you can
> live with the naive assumption that buffer cache behaviour is global to an
> entire application/command.
=====
    That would depend on each application.  Only an application can know
which data is at its last use and can safely be marked as no longer needed
without performance implications.
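
    For what it's worth, the mechanism a drop-cache option builds on is
just posix_fadvise().  A rough, untested sketch of the idea (the chunk
size and structure are mine, not the patch's):

#define _XOPEN_SOURCE 600
#include <fcntl.h>
#include <stdio.h>
#include <stdlib.h>
#include <unistd.h>

#define CHUNK (1 << 20)            /* 1 MiB read window */

int main(int argc, char **argv)
{
    if (argc != 2) {
        fprintf(stderr, "usage: %s file\n", argv[0]);
        return 1;
    }
    int fd = open(argv[1], O_RDONLY);
    if (fd < 0) { perror("open"); return 1; }

    char *buf = malloc(CHUNK);
    off_t done = 0;
    ssize_t n;
    while ((n = read(fd, buf, CHUNK)) > 0) {
        /* ... the last use of buf happens here ... */
        done += n;
        /* Tell the kernel everything read so far won't be needed
         * again, so those pages are fair game for eviction. */
        posix_fadvise(fd, 0, done, POSIX_FADV_DONTNEED);
    }
    free(buf);
    close(fd);
    return n < 0 ? 1 : 0;
}

Only the application knows where that "last use" point is, which is why
a preloaded wrapper can only guess at it.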

    I tried the direct-io option/patch(?); that died very quickly, as
the buffers have to line up on sector boundaries and must be full sectors
(e.g. 4K would be a safe figure to use with today's disks).  I don't
know what happens in the latest kernel on the last sector.  I know the
author of 'dd' had to put in special code to do the last sector w/o
DIO or it would fail on some platforms.
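
    To show the shape of the problem, here's an untested sketch of a
DIO copy loop that falls back to buffered I/O for the final partial
block, the way I understand 'dd' handles it (the 4096 alignment and the
function name are my assumptions, not anything from rsync or dd):

#define _GNU_SOURCE                /* for O_DIRECT */
#include <fcntl.h>
#include <stdlib.h>
#include <unistd.h>

#define BLK 4096

int copy_direct(const char *src, const char *dst)
{
    int in  = open(src, O_RDONLY | O_DIRECT);
    int out = open(dst, O_WRONLY | O_CREAT | O_TRUNC | O_DIRECT, 0644);
    void *buf;
    ssize_t n;

    if (in < 0 || out < 0 || posix_memalign(&buf, BLK, BLK) != 0)
        return -1;                 /* buffer must be sector-aligned */

    while ((n = read(in, buf, BLK)) == BLK)
        if (write(out, buf, BLK) != BLK)
            return -1;

    if (n > 0) {
        /* Last, partial block: an O_DIRECT write of a non-sector-
         * sized length gets EINVAL, so clear the flag and finish
         * the tail with ordinary buffered I/O. */
        fcntl(out, F_SETFL, fcntl(out, F_GETFL) & ~O_DIRECT);
        if (write(out, buf, n) != n)
            return -1;
    }
    free(buf);
    close(in);
    close(out);
    return n < 0 ? -1 : 0;
}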

    Apparently, the old Linux kernel (or the glibc interface, not exactly
sure which) checked for non-alignment and compensated by doing non-DIO
behind the scenes.  Starting about 2-3 years ago, they got rid of some
(or all?) of that compensation code, putting the responsibility on apps
to get it right.

    Given that, I stayed with the drop-cache option, as it was at
least safer.

    With rsync, at least in the patch I saw, it was a bit more complex
than I'd expect, but then rsync tries to reimplement "mmap" in software,
which seems like the start of the complexity problem.  IMO, it would
have been best to stay with real mmap and trap the error signal raised
when a file is truncated while the mapping is in use.  The error handler
does the necessary fix-ups to signal EOF gracefully to the reader, so
rsync doesn't need to re-invent the wheel.
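
    Roughly what I mean (an untested sketch with my own names, not
anything from rsync's sources): the kernel raises SIGBUS when you touch
a mapped page past the end of a file that was truncated after the
mapping was made, and the handler can longjmp out so the reader just
sees a short read:

#include <setjmp.h>
#include <signal.h>
#include <string.h>

static sigjmp_buf truncated;

static void on_sigbus(int sig)
{
    (void)sig;
    siglongjmp(truncated, 1);
}

/* Copy up to len bytes out of a mapping; returns how many bytes
 * were read before the file turned out to be truncated. */
size_t read_mapped(const char *map, size_t len, char *out)
{
    struct sigaction sa;
    memset(&sa, 0, sizeof sa);
    sa.sa_handler = on_sigbus;
    sigemptyset(&sa.sa_mask);
    sigaction(SIGBUS, &sa, NULL);

    volatile size_t i = 0;
    if (sigsetjmp(truncated, 1) == 0)
        for (; i < len; i++)       /* a vanished page faults here */
            out[i] = map[i];

    /* Arriving here via siglongjmp: graceful EOF at offset i. */
    return i;
}

A real version would restore the old handler and check which mapping
faulted, but the complexity lives in one small handler instead of a
software reimplementation of mmap.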

    With that simplification, a local sync of one partition with another
could become an mmap of the source, marked read-once, written out to the
output file, with the written range advised as not needed after writing.
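
    Something like this untested sketch (I write() to the output rather
than mapping it, which is simpler but carries the same cache hints; the
names are mine and the error handling is pared down):

#define _GNU_SOURCE                /* for madvise */
#include <fcntl.h>
#include <sys/mman.h>
#include <sys/stat.h>
#include <unistd.h>

int copy_once(const char *src, const char *dst)
{
    int in  = open(src, O_RDONLY);
    int out = open(dst, O_WRONLY | O_CREAT | O_TRUNC, 0644);
    struct stat st;

    if (in < 0 || out < 0 || fstat(in, &st) < 0 || st.st_size == 0)
        return -1;

    char *map = mmap(NULL, st.st_size, PROT_READ, MAP_PRIVATE, in, 0);
    if (map == MAP_FAILED)
        return -1;

    /* Read-once, sequential: aggressive readahead, and the kernel
     * may recycle pages behind us instead of hoarding them. */
    madvise(map, st.st_size, MADV_SEQUENTIAL);

    /* A real copy would loop on short writes; kept short here. */
    if (write(out, map, st.st_size) != st.st_size)
        return -1;

    /* DONTNEED skips dirty pages, so flush first, then drop the
     * freshly written data from the cache. */
    fdatasync(out);
    posix_fadvise(out, 0, st.st_size, POSIX_FADV_DONTNEED);

    munmap(map, st.st_size);
    close(in);
    close(out);
    return 0;
}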

    Of course, even if direct I/O worked, there would still be a place
for cache dropping.  On reads, it's possible some of the data is already
in memory; on writes, though, you can never really regain the ~15% perf
hit that going through the cache adds (it can actually be much higher
than that for high-throughput applications; I have seen slowdowns meet
and exceed 60-80% on multi-GB copies, for example).

    I can try the nocache prog, but given its blunt nature, I'm not sure
how well it will perform.
