[jcifs] jCIFS throughput: what to expect?

Michael B Allen ioplex at gmail.com
Thu Jun 12 18:29:50 GMT 2008


On 6/12/08, Laurent Millet <laurent.millet at airbus.com> wrote:
> Hello all,
>
>  This post is about performance, so the usual disclaimers apply :-)
>
>  I have been conducting basic tests to assess jCIFS performance against competing
>  implementations (Windows) and protocols (NFS). The tests involve reading from
>  and writing to the same NetApp filer, over CIFS (using java.io and jCIFS) and
>  NFS (java.io). Various buffer sizes are used (4kB to 1MB).
>
>  All tests were carried out using the latest available version (1.2.21).
>
>  jCIFS parameters were not tuned (default values were kept). Only the buffer
>  size was changed; the figures below correspond to the maximum we got.
>
>  Here are a few results.
>
>  - jCIFS v. java.io on Windows (mapped network drive), 100Mb/s network:
>           read   write   (MB/s)
>  Windows    8.7    6.5
>  jCIFS      7.2    4.8
>
>  - jCIFS v. java.io on Solaris (NFS mount), 1Gb/s network:
>           read   write   (MB/s)
>  NFS        125     33
>  jCIFS       16      9
>
>  A good thing is that jCIFS compares well with the Windows implementation, as
>  performance is on par (I have read elsewhere on this forum that you can
>  sometimes even get better performance with jCIFS).
>
>  However, I am a little surprised by two things:
>  - NFS performance is much better than with jCIFS
>  - overall, jCIFS throughput is somewhat low
>
>  Are those figures typical? What performance do you get in your environment?

Overall throughput is highly dependent on a lot of things. JCIFS does
what it can, but in general it is going to be slower simply because
there is a certain amount of mandatory OOP overhead and there are
limitations in the language (e.g. you have to copy each buffer at
least once because Java does not support arbitrary memory
referencing).

I would play with the snd_buf_size and rcv_buf_size properties
(jcifs.smb.client.snd_buf_size and jcifs.smb.client.rcv_buf_size; see
the Overview page of the API documentation). To find the optimal
values you would have to look at a capture and verify that the
SMB_COM_{READ,WRITE}_ANDX packets are actually carrying full payloads
(as opposed to every other packet carrying only a fragment, which
would definitely kill performance).

There's also a "raw" read/write patch in the patches directory but I
don't recall being clearly convinced that it actually made a
difference.

Another thing to look out for is name service timeouts. It is very
easy to run JCIFS and have it sit there for 6 seconds while it tries
name service queries that fail, and that could skew your results. You
might need to supply a WINS address or remove WINS from the
resolveOrder. Or better still, place your timer only around the IO
loop.
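For example (a minimal sketch, not from the original post; the WINS
address and SMB URL below are placeholders), timing only the read
loop so that name resolution and connection setup fall outside the
measurement:

    import jcifs.Config;
    import jcifs.smb.SmbFile;
    import jcifs.smb.SmbFileInputStream;
    import java.io.InputStream;

    public class ReadTimer {
        public static void main(String[] args) throws Exception {
            // Either give jCIFS a WINS server to query ...
            Config.setProperty("jcifs.netbios.wins", "10.10.10.10");
            // ... or skip NetBIOS lookups entirely:
            // Config.setProperty("jcifs.resolveOrder", "DNS");

            SmbFile f = new SmbFile("smb://filer/share/big.dat");
            InputStream in = new SmbFileInputStream(f); // connects here
            byte[] buf = new byte[65535];

            // Use a file large enough that the loop runs for a while.
            long total = 0;
            long start = System.currentTimeMillis(); // IO loop only
            int n;
            while ((n = in.read(buf)) > 0)
                total += n;
            long ms = System.currentTimeMillis() - start;
            in.close();

            System.out.println(total + " bytes in " + ms + " ms = "
                    + (total / 1048576.0) / (ms / 1000.0) + " MB/s");
        }
    }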

Mike

-- 
Michael B Allen
PHP Active Directory SPNEGO SSO
http://www.ioplex.com/

