[cifs-protocol] Re: [linux-cifs-client] Linux CIFS performance

Steve French smfrench at austin.rr.com
Wed Mar 7 22:02:07 GMT 2007

On Wed, 2007-03-07 at 13:28 -0800, Lam Hoang wrote:
> What is the best practical way to run/setup  iozone to compare CIFS/NFS
> performance between local drives and Netapp file mounted dirs ?
> Thanks

I am not the best person to ask about iozone (the nfs experts have
turned running iozone into an art form), but the two tests that
Shaggy's fix helps are the first two write tests (test 0 in iozone's
cli).  I first let iozone pick its default mix of tests (-a):

 /opt/iozone/bin/iozone -a -s 20m -y 256 -R

I observed that over GigE with the server having a slow disk, nfs did
noticeably better on most of these than cifs but the gap closed a lot
when cifs ran with more current code.

1) I mounted an nfs export from a Linux server (SLES10 in this case) on
a client running somewhat newer kernel code (2.6.20)
2) ran the above
3) saved the results
4) umounted the nfs mount
5) mounted a cifs export (same server)
6) ran the above
7) saved the results
8) umounted cifs
9) removed the cifs module
10) inserted a cifs module with Shaggy's fix and more current code (what
is in 2.6.21-latest-rc)
11) mounted the same export
12) reran the above
13) saved the results
14) umounted cifs

15) repeated the whole sequence a few times (this time running only the
write tests):
	/opt/iozone/bin/iozone -i 0 -s 20m -y 256 -R
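The sequence above can be scripted.  Here is a dry-run sketch: the
server name, share, and mount point are hypothetical placeholders, and
the `run` helper only echoes each command into a log rather than
executing it, so the procedure can be reviewed safely:

```shell
#!/bin/sh
# Dry-run sketch of the NFS-vs-CIFS iozone comparison described above.
# SERVER, the export path, and MNT are placeholders, not real hosts.
SERVER=testsrv
MNT=/mnt/bench
LOG=bench_steps.log
: > "$LOG"

# Record each step instead of executing it.
run() { echo "+ $*" >> "$LOG"; }

run mount -t nfs "$SERVER:/export" "$MNT"
run /opt/iozone/bin/iozone -a -s 20m -y 256 -R
run umount "$MNT"

run mount -t cifs "//$SERVER/export" "$MNT"
run /opt/iozone/bin/iozone -a -s 20m -y 256 -R
run umount "$MNT"

# Swap in the fixed cifs module, then rerun only the write tests (test 0).
run rmmod cifs
run insmod cifs.ko
run mount -t cifs "//$SERVER/export" "$MNT"
run /opt/iozone/bin/iozone -i 0 -s 20m -y 256 -R
run umount "$MNT"

cat "$LOG"
```

Replacing `run` with direct execution (and the placeholders with a real
server and export) turns the sketch into the actual benchmark loop.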

The case with "dd" is interesting too.  The person who posted gave dd a
pretty small block size (1K, IIRC).  You want writes that are at least
page size (4K), and ideally a few megabytes; there is no sense doing 1K
writes over the network.  (For cifs, though, 56K per write over the
wire is the best we can do without directio mounts or minor code
changes.)
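To see the block-size effect directly, compare the same 4 MB of data
written in 1K chunks versus 1M chunks (the file names here are
illustrative; to reproduce the measurement, point of= at a file on the
network mount and time each command):

```shell
# Same 4 MB of data, written two ways.
# 1K writes: 4096 write syscalls, and over a mount, many small wire requests.
dd if=/dev/zero of=small_bs.bin bs=1k count=4096 2>/dev/null
# 1M writes: only 4 write syscalls for identical data.
dd if=/dev/zero of=large_bs.bin bs=1M count=4 2>/dev/null
# Both files end up the same size; only the request pattern differs.
ls -l small_bs.bin large_bs.bin
```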

With the 1K block size, in my tests NFS coalesced the tiny 1K writes
into 4K requests and dispatched two to four of them in parallel, so
there was much less dead time on the wire (waiting for the server's
write response, etc.) than with cifs, and nfs issued fewer requests
overall.

Although cifs improved a lot with dd when using a larger block size,
NFS still did somewhat better than CIFS because:
1) NFS had more requests on the wire at a time, so it did not have a lag
(equal to the network round-trip time) between a write response and the
next write
2) the NFS server seemed to process writes a bit faster than Samba
(this needs more analysis, and we need to see whether an even newer
Samba, with CIFS doing POSIX opens instead of NTCreateX, would help)
3) NFS was hurt by commits, but not enough to make a difference

CIFS should do much better if the dd command were parallelized.  I am
not sure how much mounting with "forcedirectio" would help cifs, but I
would expect it to help slightly.
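One crude way to parallelize dd, assuming independent output files are
acceptable for the workload: start several dd instances in the
background and wait for all of them, which keeps multiple writes in
flight instead of one request at a time:

```shell
# Four concurrent writers, 1 MiB each; on a cifs mount this keeps
# several write requests on the wire at once instead of serializing
# each write behind the previous response.
for i in 1 2 3 4; do
  dd if=/dev/zero of="chunk_$i.bin" bs=1M count=1 2>/dev/null &
done
wait   # block until every background dd has finished
```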

> -----Original Message-----
> From: cifs-protocol-bounces+lam=synplicity.com at cifs.org
> [mailto:cifs-protocol-bounces+lam=synplicity.com at cifs.org] On Behalf Of Dave
> Kleikamp
> Sent: Wednesday, March 07, 2007 11:12 AM
> To: Steve French
> Cc: cifs-protocol at lists.samba.org; linux-cifs-client at lists.samba.org
> Subject: [cifs-protocol] Re: [linux-cifs-client] Linux CIFS performance
> On Wed, 2007-03-07 at 12:41 -0600, Steve French wrote:
> > After looking at a few posts complaining about Linux cifs performance
> > vs. nfs, Shaggy and I looked at some traces, and Shaggy spotted
> > something very important.   CIFS was rereading a page in one case in
> > which it did not need to, and not marking a page up to date in one place
> > (causing unnecessary i/o).   The result was a 20 to 30x improvement in
> > iozone write/rewrite performance - it does not affect read performance
> > or the other four write cases in iozone (or if so, not very much).
> > 
> > This is a spectacular improvement in write performance.  I doubt that it
> > would help the dd case mentioned in the earlier nfs-related post (since dd
> > opens the file write only, and typically uses block sizes which are too
> > small to be efficient).
> Did you try dd?  Of the two things my patch does, the part I was trying
> to fix was to address writes that were smaller than the page size and
> beyond the end of the file.  I expect that to help dd with the small
> block sizes.  The other fix for the bug I stumbled upon probably doesn't
> affect dd performance.
> I'll try to run a few tests myself.
> Shaggy
