[jcifs] Re: jCIFS throughput: what to expect?

Laurent Millet laurent.millet at airbus.com
Fri Jun 13 09:59:26 GMT 2008

Michael B Allen <ioplex <at> gmail.com> writes:
> Overall throughput is highly dependent on a lot of things. JCIFS does
> what it can but in general it is going to be slower just because there
> is a certain amount of mandatory OOP overhead and there are

Yes, I was told about that. Good to hear you confirm it  :-)

> limitations in the language (e.g. you have to copy a buffer at least
> once because Java does not support arbitrary memory referencing).

Would it be possible to use java.nio features to circumvent that limitation?
Anyway, this also holds true for java.io, so the buffer copy happens for jCIFS
just as it does for an NFS or Windows mapped drive.
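To illustrate the point about the mandatory copy: jCIFS's SmbFileInputStream is an ordinary java.io.InputStream, so reads always pass through a caller-supplied byte[]. A minimal sketch with plain java.io classes (no jCIFS internals; the buffer size is arbitrary):

```java
import java.io.ByteArrayInputStream;
import java.io.ByteArrayOutputStream;
import java.io.IOException;
import java.io.InputStream;

public class CopyLoop {
    // Classic java.io copy loop: every chunk passes through 'buf',
    // so at least one buffer copy per chunk is unavoidable with stream APIs.
    static long copy(InputStream in, ByteArrayOutputStream out, int bufSize)
            throws IOException {
        byte[] buf = new byte[bufSize];
        long total = 0;
        int n;
        while ((n = in.read(buf)) != -1) { // data copied into buf here
            out.write(buf, 0, n);          // and copied out again here
            total += n;
        }
        return total;
    }

    public static void main(String[] args) throws IOException {
        byte[] data = new byte[200_000];
        ByteArrayOutputStream out = new ByteArrayOutputStream();
        long copied = copy(new ByteArrayInputStream(data), out, 65536);
        System.out.println("copied " + copied + " bytes");
    }
}
```

java.nio direct ByteBuffers can avoid some copies for channel-based I/O, but as long as the library exposes stream APIs backed by byte[], the copy above still happens.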

> I would play with the snd_buf_size and rcv_buf_size properties maybe
> (see the Overview page of the API documentation). To find the optimal
> values you would have to look at a capture and verify that the
> SMB_COM_{READ,WRITE}_ANDX packets are actually carrying full payloads
> (as opposed to every other packet having a fragment which would
> definitely kill performance).

If the buffer size used to read data in the Java code is large enough compared to
[snd|rcv]_buf_size, I assume most payloads will be full and only the last one a
fragment. Is this correct?
Anyway, I will check the packets on the wire.
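The arithmetic behind that assumption can be sketched directly: as long as the Java-side buffer is large relative to the negotiated payload size, every packet but the last carries a full payload. The numbers below are purely illustrative, not jCIFS defaults:

```java
public class Payloads {
    // For a transfer of 'total' bytes in SMB packets carrying at most
    // 'maxPayload' bytes each: count of full packets, plus trailing fragment.
    static int[] split(int total, int maxPayload) {
        return new int[] { total / maxPayload, total % maxPayload };
    }

    public static void main(String[] args) {
        // Illustrative: a 1 MiB read with a 60000-byte payload cap.
        int[] r = split(1_048_576, 60_000);
        System.out.println(r[0] + " full packets + " + r[1] + "-byte fragment");
        // -> 17 full packets + 28576-byte fragment
    }
}
```

The pathological case Michael describes (every other packet a fragment) would show up on the wire as alternating payload sizes rather than a single short packet at the end.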

> There's also a "raw" read/write patch in the patches directory but I
> don't recall being clearly convinced that it actually made a
> difference.
> Another thing to look out for is to make sure that you don't have name
> service timeouts. It is very easy to run JCIFS and have it sit there
> for 6 seconds while it tries name service queries that fail. That
> could skew your results. You might need to supply a WINS address or
> remove WINS from the resolveOrder. Or better still, place your timer
> only around the IO loop.

I do get timeouts, which are presumably WINS-related (I haven't specified
anything WINS-related); I'll try making the changes you mention.
However, the timer is already placed right around the core I/O loop.
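For reference, a sketch of those changes: jCIFS 1.x reads jcifs.* system properties when its classes first load, so they must be set (or passed with -D on the command line) before any jCIFS class is touched. The property names come from the API Overview page the earlier mail mentions; the WINS address is a placeholder:

```java
public class JcifsTuning {
    public static void main(String[] args) {
        // Must run before any jcifs.* class loads.
        // Either drop WINS from the resolver order entirely...
        System.setProperty("jcifs.resolveOrder", "DNS");
        // ...or supply a WINS server so queries don't time out (placeholder IP):
        System.setProperty("jcifs.netbios.wins", "10.0.0.1");
        // Buffer sizes to experiment with, per the earlier mail:
        System.setProperty("jcifs.smb.client.snd_buf_size", "65535");
        System.setProperty("jcifs.smb.client.rcv_buf_size", "65535");

        // Time only the I/O loop, excluding name resolution and session setup:
        long t0 = System.nanoTime();
        // ... read/write loop against an SmbFile stream would go here ...
        long t1 = System.nanoTime();
        System.out.println("I/O took " + (t1 - t0) / 1_000_000 + " ms");
    }
}
```

Setting both resolveOrder and a WINS address is redundant; in practice you would pick one or the other depending on whether the network actually has a WINS server.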
