[jcifs] Reading files using SmbFileInputStream

aviv avivf at
Mon Jun 15 15:45:00 GMT 2009

Michael B Allen wrote:
> On Mon, Jun 15, 2009 at 11:01 AM, aviv<avivf at> wrote:
>> Hello, I've been using jcifs (version 1.3.9) for quite a while and must
>> say I'm impressed. However, I've noticed that transferring files from my
>> server to my local machine takes a very long time. Here is the code I use
>> to transfer the file:
>> InputStream in = new SmbFileInputStream(remoteFile);
>> OutputStream out = new FileOutputStream(localFile);
>> byte[] buffer = new byte[16904];
>> int read = 0;
>> while ((read = > 0)
>>     out.write(buffer, 0, read);
>> in.close();
>> out.close();
>> Now, this can take about 40 seconds for a 1.2MB file. When copying the
>> file using Windows Explorer, however, it only takes about 10 seconds. I
>> decided to try and find out what the difference was. Using Wireshark I
>> noticed that in the ReadAndX requests issued from Windows Explorer each
>> read was 61440 bytes in length, while those issued from jcifs were
>> consistently 4356 bytes.
>> Upon further investigation, I saw that in SmbFileInputStream the requests
>> are sent with a maximal length of transport.server.maxBufferSize, which
>> in my case is apparently 4356. I've tried changing
>> jcifs.smb.client.rcv_buf_size, but since the request sizes are bound to
>> server.maxBufferSize, it didn't help.
>> To sum up: why would maxBufferSize be different for jcifs as opposed to
>> Windows Explorer? Is it possible to treat maxBufferSize as a
>> recommendation and send requests with larger sizes?
> The most important factor in transfer performance is to put as much
> data into each packet as possible. Larger buffer sizes alone do not
> necessarily improve transfer speed; in fact they can decrease
> performance if, for example, every other packet contains only a small
> fragment of the stream.
> In general JCIFS is pretty fast, but even 10 seconds to transfer a 1.2 MB
> file is pretty slow, so I have to wonder if your network just has really
> high latency. That would have a significant impact, because JCIFS is
> exchanging more packets in the transfer.
> There are multiple buffer settings you can play with. You should be able
> to match the buffer settings used by Windows Explorer, in which case you
> should get similar performance. You have to analyze the Wireshark capture
> and fiddle with writeSize, snd_buf_size and rcv_buf_size until you get
> results similar to the Windows Explorer capture.
> But in general, unless performance in a particular environment is
> really critical, you shouldn't mess with these settings because you'll
> make it fast in one case and slow in all the other cases.
> Mike
> -- 
> Michael B Allen
> Java Active Directory Integration
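
For reference, the copy loop quoted at the top (its while condition appears to have been mangled in the archive; the usual read-then-test form is assumed here) can be exercised against plain in-memory streams, independent of jcifs or the network:

```java
import java.io.ByteArrayInputStream;
import java.io.ByteArrayOutputStream;
import java.io.IOException;
import java.io.InputStream;
import java.io.OutputStream;
import java.util.Arrays;
import java.util.Random;

public class CopyLoopDemo {
    // Same pattern as the quoted snippet: read into a 16904-byte buffer
    // until EOF, writing each chunk to the output stream.
    static long copy(InputStream in, OutputStream out) throws IOException {
        byte[] buffer = new byte[16904];
        long total = 0;
        int read;
        while ((read = > 0) {
            out.write(buffer, 0, read);
            total += read;
        }
        in.close();
        out.close();
        return total;
    }

    public static void main(String[] args) throws IOException {
        byte[] data = new byte[1_200_000]; // ~1.2 MB, the file size discussed above
        new Random(42).nextBytes(data);
        ByteArrayOutputStream sink = new ByteArrayOutputStream();
        long copied = copy(new ByteArrayInputStream(data), sink);
        System.out.println(copied == data.length && Arrays.equals(data, sink.toByteArray()));
    }
}
```

The loop itself is fine; as the thread shows, the bottleneck is the per-request read size on the wire, not the application-side buffer.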

The latency is pretty high, which is exactly why I want to minimize the
number of requests made (and just make each request larger). I did play
with snd_buf_size and rcv_buf_size, but it did not work, since the length
in the request sent is capped to the value of
transport.server.maxBufferSize, which I cannot change; or am I missing
something? Here's the snippet from SmbFileInputStream on which I am basing
my claim:

      int r, n;
      do {
          r = len > readSize ? readSize : len; // this is the line which caps the size in the request

          if( file.log.level >= 4 )
              file.log.println( "read: len=" + len + ",r=" + r + ",fp=" + fp );

          try {
              SmbComReadAndX request = new SmbComReadAndX( file.fid, fp, r, null );
              if( file.type == SmbFile.TYPE_NAMED_PIPE ) {
                  request.minCount = request.maxCount = request.remaining = 1024;
              }
              file.send( request, response );
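
If I'm reading the 1.3.x source right, readSize itself is set in the SmbFileInputStream constructor as roughly the minimum of rcv_buf_size and server.maxBufferSize, each less some header overhead (70 bytes, apparently). The observed 4356-byte reads would then imply a server maxBufferSize around 4426; that figure is inferred, not taken from a capture:

```java
public class ReadSizeCap {
    public static void main(String[] args) {
        int rcvBufSize = 60416;          // jcifs.smb.client.rcv_buf_size (see P.S. below)
        int serverMaxBufferSize = 4426;  // hypothetical server MaxBufferSize, inferred
        int headerOverhead = 70;         // assumed SMB header/ReadAndX overhead
        // Sketch of the cap: raising the client buffer setting cannot push
        // readSize past what the server advertises.
        int readSize = Math.min(rcvBufSize - headerOverhead,
                                serverMaxBufferSize - headerOverhead);
        System.out.println(readSize);
    }
}
```

This would explain why changing rcv_buf_size alone had no effect: the min() always resolves to the server side.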

P.S. - I'm currently using the following configuration:
    Config.setProperty("jcifs.smb.client.snd_buf_size", "60416");
    Config.setProperty("jcifs.smb.client.rcv_buf_size", "60416");
to no effect.
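
For scale, assuming one serialized ReadAndX round trip per chunk, the two read sizes from the capture imply very different request counts for a 1.2 MB file:

```java
public class RoundTrips {
    public static void main(String[] args) {
        int fileBytes = 1_200_000; // ~1.2 MB file from the original post
        int jcifsRead = 4356;      // ReadAndX length observed from jcifs
        int explorerRead = 61440;  // ReadAndX length observed from Explorer
        // Ceiling division: round trips needed if requests are serialized.
        int jcifsTrips = (fileBytes + jcifsRead - 1) / jcifsRead;
        int explorerTrips = (fileBytes + explorerRead - 1) / explorerRead;
        System.out.println(jcifsTrips + " " + explorerTrips);
        // On a high-latency link each extra round trip costs a full RTT,
        // which is consistent with the 40s vs 10s difference reported above.
    }
}
```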

Sent from the Samba - jcifs mailing list archive.
