Direct I/O support (patches included)

Dag Wieers dag at wieers.com
Sun Feb 17 07:13:14 MST 2013


On Sat, 16 Feb 2013, Linda Walsh wrote:

> I wondered about that as well -- could speed things
> up by 30% over going through the slow Linux buffers.
>
> One thing that the 'dd' people found out, though, was
> that if you do direct I/O, memory and your I/O really
> do have to line up -- it may be that only 512-byte alignment
> is necessary (or 4096 on some newer disks)... but ideally
> you look at what stat claims for the write size, since that
> field in stat isn't the allocation size but the smallest
> optimal write size -- i.e. the "stripe size" if you have
> a RAID -- as there you want to write whole stripes at once,
> otherwise you get into a read/modify/write cycle that slows
> down your disk I/O with 200% overhead -- *ouch!*...

True, the patch can be improved. But even without alignment it avoids 
excessive buffering when transferring huge files on systems with a lot of 
free memory. The behavior I noticed (and that this patch fixes) is that 
rsync only reads until the buffer is filled, and then only writes until 
the buffers have been flushed. With direct I/O you have reads and writes 
happening at the same time.
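
For illustration, here is a rough sketch of what an aligned direct I/O 
copy loop could look like (not the actual patch -- the filenames from 
argv, the 1 MiB buffer size and the minimal error handling are just 
placeholders):

#define _GNU_SOURCE             /* for O_DIRECT on Linux */
#include <fcntl.h>
#include <stdlib.h>
#include <unistd.h>
#include <sys/stat.h>

int main(int argc, char **argv)
{
    struct stat st;
    void *buf;
    ssize_t n;
    int in, out;

    if (argc < 3)
        return 1;
    in = open(argv[1], O_RDONLY | O_DIRECT);
    out = open(argv[2], O_WRONLY | O_CREAT | O_DIRECT, 0644);
    if (in < 0 || out < 0 || fstat(out, &st) < 0)
        return 1;

    /* O_DIRECT needs the buffer (and the transfer size) aligned;
       st_blksize is the optimal I/O size mentioned above, e.g.
       the stripe size on a RAID. */
    if (posix_memalign(&buf, (size_t)st.st_blksize, 1 << 20))
        return 1;

    /* Note: a final partial block is not a multiple of the
       alignment and would need special handling (omitted here). */
    while ((n = read(in, buf, 1 << 20)) > 0)
        if (write(out, buf, n) != n)
            return 1;

    free(buf);
    close(in);
    close(out);
    return 0;
}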

Since it seems you know what is needed to improve it, could you propose a patch?

(I got some hints from iozone wrt. alignment and portability)

Another solution is fadvise(), although I still saw the behavior mentioned 
above when using --drop-cache, so it didn't fix my use case, which is why 
I wrote this patch.
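
For comparison, the fadvise() route boils down to something like the 
following (again just a sketch of the idea behind --drop-cache, with a 
hypothetical helper name):

#define _XOPEN_SOURCE 600       /* for posix_fadvise() */
#include <fcntl.h>
#include <unistd.h>

/* Hypothetical helper: after writing to fd, flush the dirty pages
   and ask the kernel to evict the file from the page cache, so a
   huge transfer doesn't push everything else out of memory. */
static int drop_cache(int fd)
{
    if (fsync(fd) < 0)  /* DONTNEED only drops clean pages */
        return -1;
    return posix_fadvise(fd, 0, 0, POSIX_FADV_DONTNEED);
}

The difference is that this still goes through the buffer cache first, so 
the reads and writes stay serialized the way I described above.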

Kind regards,
-- 
-- dag wieers, dag at wieers.com, http://dag.wieers.com/
-- dagit linux solutions, info at dagit.net, http://dagit.net/


