[linux-cifs-client] Additional perf testing cifs vs. nfs vs. smbfs

Steve French smfrench at austin.rr.com
Fri Jul 29 22:18:17 GMT 2005


Tried various simple tests over the last few days with 100MB and 300MB 
files: copying a large file from the local fs on the client to the server 
(Samba, nfsd, and also a Windows SMB and NFS server) and vice versa (copy 
from server to client), mostly timing the cp command.   Tests were run 
over 100Mb Ethernet and over the loopback interface; the server had a 
relatively slow disk.
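
The timing runs were basically of the following form (a sketch; the mount 
point and file names here are made up for illustration):

    # client -> server copy, timed
    time cp /tmp/test300mb /mnt/cifs/test300mb

    # server -> client copy, timed
    time cp /mnt/cifs/test300mb /tmp/test300mb.copy

    # unmount and remount between runs so the client page cache does
    # not skew the server -> client numbers (assumes an fstab entry)
    umount /mnt/cifs; mount /mnt/cifs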

cifs did well on these tests, but it will do a lot better once the 
cifs_writepages code is complete.

NFS defaults to 32K operations; cifs defaults to 16K for read (4K for 
write); smbfs uses 4K.
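
For anyone who wants to experiment with these sizes, both nfs and cifs 
accept rsize/wsize mount options; something along these lines (server, 
share, and user names are placeholders, and the exact values a given 
client version accepts may vary):

    # nfs with the 32K operation sizes made explicit
    mount -t nfs -o rsize=32768,wsize=32768 server:/export /mnt/nfs

    # cifs with the 16K read size made explicit
    mount -t cifs //server/share /mnt/cifs -o user=testuser,rsize=16384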

For large file reads NFS had a slight edge, as much because of the groups 
of 3 to 4 reads it issues at one time as because of the slightly larger 
read size.   For writes smbfs and cifs had an edge; when cifs is 
configured to use larger buffers (or forcedirectio), cifs wins easily.  
smbfs was fractionally faster than cifs for write, and NFS's frequent 
commits slow it down a lot.
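
The larger-buffer and forcedirectio configurations above are set at 
module load and mount time; roughly as follows (the buffer value is 
illustrative, and the CIFSMaxBufSize parameter name should be checked 
against your kernel's cifs documentation):

    # load cifs with a larger maximum SMB buffer size
    modprobe cifs CIFSMaxBufSize=65536

    # mount with forcedirectio so I/O bypasses the client page cache
    # and goes out at the negotiated buffer size
    mount -t cifs //server/share /mnt/cifs -o user=testuser,forcedirectio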

In Connectathon test 5a (repeated large file writes, followed by reads of 
the same file), cifs wins over nfs pretty easily, and smbfs cannot 
complete the test.  Test 5b is not very interesting because the files 
stay in cache from the previous test 5a run, so cifs and nfs both run at 
nearly local speed.
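
For those who have not run the suite, the test amounts to roughly the 
following access pattern (a sketch only, not the actual Connectathon 
source; sizes and iteration counts are illustrative):

    # 5a-like phase: repeated large sequential writes
    for i in 1 2 3 4 5; do
        dd if=/dev/zero of=/mnt/cifs/bigfile bs=8192 count=12800  # ~100MB
    done

    # 5b-like phase: read the same file back
    dd if=/mnt/cifs/bigfile of=/dev/null bs=8192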

Where nfs is likely to have a bigger edge on reads is the case in which 
the server disk is extremely fast and the server has lots of memory to 
cache the file data, so that network efficiency becomes the most 
important consideration.

CIFS also beat nfs fairly easily on the 300MB file copy to the Windows 
server (with SFU providing the NFS server), but the results were more 
even in the read case (copy from server).

Unfortunately the systems aren't fast enough for gigabit to make much 
difference, so I will have to try a different server for the gigabit tests.

