[cifs-protocol] Linux CIFS performance

Steve French smfrench at austin.rr.com
Wed Mar 7 18:41:38 GMT 2007


After looking at a few posts complaining about Linux cifs performance
vs. nfs, Shaggy and I looked at some traces, and Shaggy spotted
something very important.  CIFS was rereading a page in one case in
which it did not need to, and was not marking a page up to date in one
place (causing unnecessary i/o).  The result was a 20 to 30x
improvement in iozone write/rewrite performance; the fix does not
affect read performance or the other four write cases in iozone (or if
so, not by much).
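
The fix itself is in the cifs page cache code, but the general pattern
is simple.  The sketch below is written in the style of the 2.6-era
prepare_write/commit_write address_space methods; it is a hypothetical
illustration, not the actual patch.  example_read_page_from_server is
a made-up helper, and housekeeping such as dirtying the page and
updating i_size is omitted.

#include <linux/pagemap.h>

/* hypothetical helper, not a real kernel function */
static int example_read_page_from_server(struct file *file,
                                         struct page *page);

static int example_prepare_write(struct file *file, struct page *page,
                                 unsigned from, unsigned to)
{
        /* A write that covers the whole page overwrites every byte,
         * so there is no need to read the page back from the server
         * first. */
        if (from == 0 && to == PAGE_CACHE_SIZE)
                return 0;

        /* Partial write to a page that is not yet up to date: only
         * here is a read from the server actually required. */
        if (!PageUptodate(page))
                return example_read_page_from_server(file, page);

        return 0;
}

static int example_commit_write(struct file *file, struct page *page,
                                unsigned from, unsigned to)
{
        /* Once the data has been copied in, a fully written page can
         * be marked up to date so later reads are served from the
         * cache rather than rereading from the server. */
        if (from == 0 && to == PAGE_CACHE_SIZE)
                SetPageUptodate(page);
        return 0;
}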

This is a spectacular improvement in write performance.  I doubt that
it would help the dd case mentioned in the earlier nfs-related thread,
since dd opens the file write only and typically uses block sizes
which are too small to be efficient.
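
The block size effect is easy to reproduce with a small C program that
mimics dd.  In the sketch below the mount point, file name, and sizes
are just examples; comparing a 4K block size against 1M on a cifs
mount shows the difference directly.

#include <fcntl.h>
#include <stdio.h>
#include <stdlib.h>
#include <string.h>
#include <unistd.h>

#define BLOCK_SIZE (1024 * 1024)   /* try 4096 vs 1M and compare */
#define TOTAL_MB   256

int main(void)
{
        char *buf = malloc(BLOCK_SIZE);
        int fd = open("/mnt/cifs/testfile",
                      O_WRONLY | O_CREAT | O_TRUNC, 0644);
        long long remaining = (long long)TOTAL_MB * 1024 * 1024;

        if (fd < 0 || buf == NULL) {
                perror("setup");
                return 1;
        }
        memset(buf, 'x', BLOCK_SIZE);

        /* When writes are not cached (e.g. the write-only dd case),
         * each write() maps to wire traffic, so small blocks mean
         * many more round trips for the same amount of data. */
        while (remaining > 0) {
                ssize_t n = write(fd, buf, BLOCK_SIZE);
                if (n < 0) {
                        perror("write");
                        return 1;
                }
                remaining -= n;
        }
        close(fd);
        free(buf);
        return 0;
}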

The general considerations in cifs vs. nfsv3 performance in Linux are
the following:

1) CIFS will cache (safely) files that are open by only one client,
using oplocks.  NFSv3 (unlike NFSv4 and CIFS) does not have safe
caching with an oplock-like mechanism; the Linux NFSv3 client instead
caches based on a timer, which helps a little but prevents it from
keeping data cached long enough to help certain random i/o testcases.
On the other hand, NFSv3 will cache writes for short periods even on
multiply open files, which cifs will not.

2) For single threaded applications, NFS gets more requests on the
wire than CIFS.  The Linux CIFS implementation is somewhat more
serialized: CIFS sends the writes from a single writepages call (on a
single inode) in order, while NFS will send them in parallel.  This is
a huge help to NFS in cases like the dd example, since CIFS does not
keep the network busy enough.  (A userspace illustration of the
parallel approach appears after this list.)

3) NFS network i/o size seems to fall back to 4K more often than it
should, while CIFS normally uses 56K for writes and 16K for reads.
This should help cifs a little, but NFS still gets more read and write
requests on the wire, and in relatively recent kernels NFSv3 can be
configured for larger i/o sizes more easily than cifs.

4) NFS sync interval: NFS syncs every few megabytes, which slows
writes down in a few places.

5) Chattiness: Although NFS has the frequent access call, which slows
its performance a little, cifs is doing more SetPathInfo calls in
cases where attributes are set often (this should be much improved in
2.6.21), and cifs also had a case in which files opened read/write
(rather than write only) would do extra readpages (now fixed in
2.6.21-rc).  I am hopeful that implementing the new POSIX setattr will
reduce the number of setattr calls, and that the new POSIX open will
greatly reduce the overhead of cifs open (no extra setattr on create
and mkdir).
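
As an aside on point 2 above, the value of parallelism is easy to see
from userspace, even though the kernel writepages path is what really
matters.  The sketch below is illustrative only (the file name, thread
count, and sizes are arbitrary): several threads issuing pwrite calls
to disjoint regions keep more write requests in flight than a single
thread writing serially.  Build with -pthread.

#define _XOPEN_SOURCE 500       /* for pwrite */
#include <fcntl.h>
#include <pthread.h>
#include <stdio.h>
#include <stdlib.h>
#include <string.h>
#include <unistd.h>

#define NTHREADS   4
#define CHUNK      (1024 * 1024)
#define PER_THREAD 64           /* chunks written by each thread */

static int fd;

static void *writer(void *arg)
{
        long id = (long)arg;
        char *buf = malloc(CHUNK);

        memset(buf, 'x', CHUNK);
        /* Each thread owns a disjoint region of the file, so its
         * writes have no ordering dependency on the other threads. */
        for (int i = 0; i < PER_THREAD; i++) {
                off_t off = ((off_t)id * PER_THREAD + i) * CHUNK;
                if (pwrite(fd, buf, CHUNK, off) < 0)
                        perror("pwrite");
        }
        free(buf);
        return NULL;
}

int main(void)
{
        pthread_t tid[NTHREADS];

        fd = open("/mnt/cifs/parallel-test",
                  O_WRONLY | O_CREAT | O_TRUNC, 0644);
        if (fd < 0) {
                perror("open");
                return 1;
        }
        for (long i = 0; i < NTHREADS; i++)
                pthread_create(&tid[i], NULL, writer, (void *)i);
        for (int i = 0; i < NTHREADS; i++)
                pthread_join(tid[i], NULL);
        close(fd);
        return 0;
}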

In general I am seeing NFS much faster than cifs on reads from single
threaded apps over GigE, but writes now vary a lot depending on many
factors (i/o size, sync interval, etc.).


