[linux-cifs-client] directio cifs more than doubled loopback file copy (write) perf

Steve French smfltc at us.ibm.com
Thu Dec 2 19:45:31 GMT 2004


As another comparison point, a set of local runs against an ext3
partition on the same box came in at least 10% slower than JFS, not
counting a few ext3 runs in which ext3 was much, much slower (over 5
seconds; probably something else hitting the disk in the background
caused the anomaly).

I also reran the test repeatedly with double the file size (count=400),
multiple times for each configuration, with similar results: 0.29
seconds local JFS, 0.32 seconds local ext3, 0.8 seconds cifs direct with
60K buffers, and more than 2.5 seconds the old way (i.e. without direct
and using 4K writes).

If I increase the test file size much beyond about 200MB on this
system, I hit the disk too much, which causes wide variation in the
results from run to run depending on when the disk activity occurs.

On Thu, 2004-12-02 at 13:20, Steve French wrote:
> On Thu, 2004-12-02 at 12:17, Jeremy Allison wrote:
> > On Thu, Dec 02, 2004 at 12:06:22PM -0600, Steve French wrote:
> > > Trying direct i/o over a loopback network connection (cifs and samba
> > > on the same box).
> > > 
> > > The time for dd if=/dev/zero of=/mnt/testfile bs=256K count=200
> > > 
> > > went from more than 1.3 seconds on average without directio to 0.5
> > > seconds with mount -o directio
> > > 
> > > I did have a minor bug in the code to configure it (I had written an
> > > ifdef on CIFS_EXPERIMENTAL instead of CONFIG_CIFS_EXPERIMENTAL) and will
> > > fix that and try to get the performance up a bit higher by eliminating a
> > > copy and sending more than 16K on writes.
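
As an aside on the config bug above: kconfig only generates the
CONFIG_-prefixed define, so a bare "#ifdef CIFS_EXPERIMENTAL" guard
silently compiles the code out.  A small compilable sketch of the
distinction (DIRECTIO_BUILT is just an illustrative name, not anything
in the real source):

    #include <stdio.h>

    /* Wrong symbol: nothing ever defines a bare CIFS_EXPERIMENTAL,
     * so code guarded by it is silently left out of the build. */
    #ifdef CIFS_EXPERIMENTAL
    #define DIRECTIO_BUILT 1
    #endif

    /* Right symbol: kconfig emits CONFIG_CIFS_EXPERIMENTAL when the
     * experimental cifs option is enabled in the kernel config. */
    #ifdef CONFIG_CIFS_EXPERIMENTAL
    #define DIRECTIO_BUILT 1
    #endif

    #ifndef DIRECTIO_BUILT
    #define DIRECTIO_BUILT 0
    #endif

    int main(void)
    {
        printf("directio support compiled in: %d\n", DIRECTIO_BUILT);
        return 0;
    }
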
> > 
> > In SVN I've fixed the LARGE_WRITEX issue so you should be able to
> > issue 128k writes.
> > 
> > Jeremy.
> > 
> 
> Thanks - I will give that a shot.  I was looking at this LARGE_WRITEX
> code in svn for Samba 3 and it looks like it should work (now if I just
> figure out whether there are any weird semantics to LARGE_WRITEX, and
> clone the SendRcv, smb_send and cifs_write routines to do gather sends,
> we can really try some fun things).
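
By "gather sends" I mean handing the transport an array of buffers, the
SMB header plus the page data, in one call rather than copying them
into a single contiguous buffer first.  A userspace-flavored sketch
using writev(2), just to show the shape; the names are illustrative and
the real client code would go through the kernel socket interface:

    #include <sys/uio.h>
    #include <unistd.h>

    /* Illustrative gather send: one call pushes the SMB header and a
     * separately located payload to the socket without an extra copy.
     * smb_hdr and payload are stand-in names, not the real cifs code. */
    static ssize_t gather_send(int sock_fd,
                               const void *smb_hdr, size_t hdr_len,
                               const void *payload, size_t data_len)
    {
        struct iovec iov[2];

        iov[0].iov_base = (void *)smb_hdr;
        iov[0].iov_len  = hdr_len;
        iov[1].iov_base = (void *)payload;
        iov[1].iov_len  = data_len;

        /* writev can return a short count; a real sender would loop
         * until all bytes are out or an error occurs. */
        return writev(sock_fd, iov, 2);
    }
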
> 
> As expected even larger writes do help - a lot. See below:
> 
> 4K writes through page cache                  1.3  sec     38MB/sec
> (4K writes direct are about the same as above)
> 
> 16K writes (mount -o directio)                0.5  sec    100MB/sec
> 
> 60K writes (change #define CIFS_MAX_MSGSIZE   0.35 sec    143MB/sec
>      on the client, add "max xmit = 62000" in
>      smb.conf on the server, and mount -o directio)
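
For the 60K case the exact number matters less than keeping the
client's maximum message size at or below what the server will accept
via "max xmit".  On the client side that is roughly a one line change;
the value below is illustrative, not necessarily the one I tested with:

    /* Illustrative only: raise the client's maximum SMB buffer size so
     * writes of roughly 60K fit in a single request.  Keep it no larger
     * than the server's "max xmit" setting. */
    #define CIFS_MAX_MSGSIZE (60 * 1024)
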
> 
> I can probably do a lot better than that.  Also note that this is just a
> laptop, not a high end server, and that the amount of data copied is
> small and likely fits in the (server) page cache without JFS having to
> hit the disk much (larger sizes would obviously hit the disk a lot
> more).
> 
> I suspect that direct i/o will help a lot with client memory pressure as
> well since less inode data caching will be going on. 
> 
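
To expand on that last point: with the directio mount option, file data
is not cached in the client page cache at all, which is loosely
analogous to what O_DIRECT gives an application on a local filesystem.
For anyone who has not played with direct i/o before, a small
local-filesystem example (nothing cifs-specific here; the path is
arbitrary, the target filesystem must support O_DIRECT, and the
alignment requirements vary by filesystem):

    #define _GNU_SOURCE
    #include <fcntl.h>
    #include <stdlib.h>
    #include <string.h>
    #include <unistd.h>

    /* Plain O_DIRECT write against a local file, only to illustrate
     * what bypassing the page cache means in general.  O_DIRECT needs
     * suitably aligned buffers and transfer sizes (typically 512 bytes
     * or the filesystem block size). */
    int main(void)
    {
        const size_t len = 64 * 1024;
        void *buf;
        int fd;

        if (posix_memalign(&buf, 4096, len) != 0)
            return 1;
        memset(buf, 0, len);

        fd = open("/tmp/odirect-test", O_WRONLY | O_CREAT | O_DIRECT, 0644);
        if (fd < 0)
            return 1;

        if (write(fd, buf, len) != (ssize_t)len)
            return 1;

        close(fd);
        free(buf);
        return 0;
    }
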


