[linux-cifs-client] directio cifs more than doubled loopback file copy (write) perf

Steve French smfltc at us.ibm.com
Thu Dec 2 19:20:33 GMT 2004


On Thu, 2004-12-02 at 12:17, Jeremy Allison wrote:
> On Thu, Dec 02, 2004 at 12:06:22PM -0600, Steve French wrote:
> > Trying direct io in loopback network connection (cifs and samba on the
> > same box).
> > 
> > The time for dd if=/dev/zero of=/mnt/testfile bs=256K count=200
> > 
> > went from more than 1.3 seconds on average without directio to 0.5
> > seconds with mount -o directio
> > 
> > I did have a minor bug in the code to config it (I had marked an ifdef
> > CIFS_EXPERIMENTAL instead of CONFIG_CIFS_EXPERIMENTAL) and will fix that
> > and try to get the performance up a bit higher by eliminating a copy and
> > sending more than 16K on writes.
> 
> In SVN I've fixed the LARGE_WRITEX issue so you should be able to
> issue 128k writes.
> 
> Jeremy.
> 

Thanks - I will give that a shot.  I was looking at the LARGE_WRITEX
code in svn for Samba 3 and it looks like it should work (now if I just
figure out whether there are any weird semantics to LARGE_WRITEX, and
clone the SendRcv, smb_send and cifs_write routines to do scatter/gather
sends, we can really try some fun things).
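
To make the scatter/gather idea concrete, here is a rough sketch of
what a two-piece gather send could look like with the kernel socket
API (smb_send_sg is a made-up name, not an actual cifs routine, and I
am assuming kernel_sendmsg/kvec here; real code would also loop on
short sends):

    #include <linux/net.h>
    #include <linux/socket.h>
    #include <linux/uio.h>

    /* Send the SMB header and the write payload in one gather call,
     * so the payload never has to be copied next to the header. */
    static int smb_send_sg(struct socket *sock, void *smb_hdr,
                           unsigned int hdr_len, const char *data,
                           unsigned int data_len)
    {
            struct msghdr msg = { .msg_flags = MSG_NOSIGNAL };
            struct kvec iov[2];

            iov[0].iov_base = smb_hdr;      /* SMB/LARGE_WRITEX header */
            iov[0].iov_len  = hdr_len;
            iov[1].iov_base = (char *)data; /* payload, sent in place */
            iov[1].iov_len  = data_len;

            /* The TCP layer gathers both pieces into the stream. */
            return kernel_sendmsg(sock, &msg, iov, 2,
                                  hdr_len + data_len);
    }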

As expected, even larger writes help - a lot. See below:

(Each dd run moves 256K x 200 = 50MB, so MB/sec is just 50MB over the
elapsed time.)

4K writes through the page cache		1.3 sec     38MB/sec
    (4K direct writes are about the same)

16K writes (mount -o directio)			0.5 sec     100MB/sec

60K writes (change #define CIFS_MAX_MSGSIZE	0.35 sec    143MB/sec
    on the client, add "max xmit = 62000" to
    smb.conf on the server, and mount -o directio)
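
For anyone who wants to reproduce the 60K case, the setup is roughly
as follows (server and share names are placeholders; CIFS_MAX_MSGSIZE
lives in the cifs headers):

On the server, in smb.conf:

    [global]
        max xmit = 62000

On the client, after bumping CIFS_MAX_MSGSIZE and rebuilding the cifs
module:

    mount -t cifs //server/share /mnt -o directio
    dd if=/dev/zero of=/mnt/testfile bs=256K count=200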

I can probably do a lot better than that.  Also note that this is just a
laptop, not a high end server, and that the amount of data copied is
small and likely fits in the (server) page cache without JFS having to
hit the disk much (larger copies would obviously hit the disk a lot
more).

I suspect that direct i/o will help a lot with client memory pressure as
well since less inode data caching will be going on. 
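
As a rough illustration of why (cifs_direct_write is a made-up name,
this glosses over signing, retries and short sends, and it reuses the
hypothetical smb_send_sg from the sketch above; assumes <linux/slab.h>
and the uaccess header): a direct write path can copy straight from
the user buffer into a transmit buffer and push it to the server, so
no page-cache pages are ever populated for the file data:

    static ssize_t cifs_direct_write(struct socket *sock, void *smb_hdr,
                                     unsigned int hdr_len,
                                     const char __user *buf, size_t count)
    {
            char *xmit;
            ssize_t rc;

            /* Bounce buffer for one wire-sized chunk; a gather send
             * like smb_send_sg above could avoid even this copy for
             * suitably aligned user data. */
            xmit = kmalloc(count, GFP_KERNEL);
            if (!xmit)
                    return -ENOMEM;
            if (copy_from_user(xmit, buf, count)) {
                    kfree(xmit);
                    return -EFAULT;
            }
            /* Data goes straight to the server; the page cache is
             * never touched, so cached file data does not pile up
             * on the client. */
            rc = smb_send_sg(sock, smb_hdr, hdr_len, xmit, count);
            kfree(xmit);
            return rc;
    }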



