[jcifs] max_xmit - why 4280?

Del Merritt del at alum.mit.edu
Fri Mar 12 09:19:00 MST 2010


On 03/12/2010 10:15 AM, Michael B Allen wrote:
> On Thu, Mar 11, 2010 at 5:46 PM, Del Merritt <del at alum.mit.edu> wrote:
>   
>> On 03/11/2010 05:28 PM, Michael B Allen wrote:
>>     
>>> On Thu, Mar 11, 2010 at 12:14 PM, Del Merritt <del at alum.mit.edu> wrote:
>>>       
>>>> A question was raised a while back about getting a DcerpcException with the
>>>> message "Fragmented request PDUs currently not supported":
>>>>
>>>> http://article.gmane.org/gmane.network.samba.java/6745/match=fragmented+request+pdus
>>>>
>>>> I am seeing this too, and it appears that the values for max_xmit and
>>>> max_recv are hardcoded to 4280 in jcifs/dcerpc/DcerpcHandle.java; I'm using
>>>> JCIFS 1.3.14 as a baseline.
>>>>
>>>> Is there a reason for this limit?  Is it something that should be negotiated
>>>> and just hasn't happened yet?  I'm still learning the ins and outs of CIFS
>>>> as I am porting JCIFS to a J2ME environment.
>>>>
>>>>         
>>> Hi Del,
>>>
>>> Those are just observed values.
>>>       
>> Sorry - would you elaborate?
>>     
> The values for max_xmit and max_recv are values that Windows clients
> have been observed using. Windows is as Windows does. So we just copy
> what Windows does and you should too.
>   

Fair 'nuf.
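
For anyone who finds this thread later: the values in question sit near
the top of jcifs/dcerpc/DcerpcHandle.java (1.3.14).  From memory it is
roughly the following, so treat it as approximate rather than a verbatim
quote:

    // jcifs/dcerpc/DcerpcHandle.java (approximate excerpt, from memory)
    protected int max_xmit = 4280;      // request fragment size advertised at bind time
    protected int max_recv = max_xmit;  // largest response fragment we accept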

>>> Note that JCIFS supports fragmented response PDUs. It just doesn't
>>> support fragmented *request* PDUs. It's somewhat unusual for a DCERPC
>>> request to contain a lot of data. Usually it's the response that is
>>> big.
>>>
>>>       
>> I am trying to send large chunks of data (that, when assembled, will be
>> a "file") to another system that I discover via JCIFS.  To minimize
>> overhead - mostly in the form of time spent - on the sending host, I'm
>> trying to make those chunks as big as is reasonable.
>>     
> I think you're confused. The DCERPC layer has nothing to do with
> sending files. 


I probably am somewhat confused.  What I'm trying to do is discover
systems with printers and then send data - a "file" - to one of those
printers in "RAW" mode.

I currently do the send-the-data step like this (sketched below):

   1. get a DcerpcHandle to the server's printer
   2. loop over the data with a class that extends DcerpcMessage,
      sending it in roughly 4K chunks
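
Here is a minimal sketch of those two steps.  WritePrinterMsg is a
hypothetical stand-in for my own DcerpcMessage subclass that wraps one
SPOOLSS WritePrinter call - it is not something JCIFS ships - and the
printer policy handle comes from my (equally hypothetical) OpenPrinter
message:

    import jcifs.dcerpc.DcerpcHandle;
    import jcifs.smb.NtlmPasswordAuthentication;

    public class RawPrintSketch {
        // printerHandle: the 20-byte policy handle from my hypothetical
        // OpenPrinter message; data: the raw print job bytes.
        static void sendRaw(String server, NtlmPasswordAuthentication auth,
                            byte[] printerHandle, byte[] data) throws Exception {
            // Step 1: bind to the server's spooler pipe.
            DcerpcHandle handle = DcerpcHandle.getHandle(
                    "ncacn_np:" + server + "[\\PIPE\\spoolss]", auth);
            try {
                // Step 2: push the data in chunks small enough that each
                // WritePrinter request fits in a single PDU (max_xmit = 4280),
                // since fragmented *request* PDUs aren't supported.
                final int CHUNK = 4096;
                for (int off = 0; off < data.length; off += CHUNK) {
                    int len = Math.min(CHUNK, data.length - off);
                    // WritePrinterMsg: hypothetical DcerpcMessage subclass.
                    handle.sendrecv(new WritePrinterMsg(printerHandle, data, off, len));
                }
            } finally {
                handle.close();
            }
        }
    }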

The chunk size depends on max_xmit since I am using the DCERPC layer
(see the sizing sketch after the list below).  Sorry, that's greatly
simplified; reproducing the actual code here would take lots of
classes.  At the protocol level - i.e., what Wireshark shows - I'm
basically doing:

   1. a DCERPC bind to the host's \PIPE\spoolss
   2. a SPOOLSS\OpenPrinter
   3. a SPOOLSS\StartDocPrinter
   4. a bunch of SPOOLSS\WritePrinter calls, and finally, when the data
      is drained,
   5. closing things down when it's all done
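
For the sizing part, the back-of-the-envelope I'm working from looks like
this (my numbers; the stub overhead is an estimate, not something JCIFS
exposes):

    // Why the chunks come out "4Kish": each WritePrinter call has to fit
    // in one request PDU of at most max_xmit bytes.
    static final int MAX_XMIT       = 4280; // hard-coded in DcerpcHandle
    static final int DCERPC_REQ_HDR = 24;   // DCERPC request PDU header
    static final int WRITE_OVERHEAD = 32;   // policy handle (20) + count/size fields (estimate)

    static int maxWriteChunk() {
        return MAX_XMIT - DCERPC_REQ_HDR - WRITE_OVERHEAD;   // about 4224 bytes of job data per call
    }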

What would you recommend as an alternative? 

Thanks,
-Del

> The size of file stream writes is somewhat controlled
> by the jcifs.smb.client.snd_buf_size and jcifs.smb.client.rcv_buf_size
> properties. But you really don't want to mess with those values. Even
> if you see an increase in performance by increasing the size of those
> values, it is very possible that in another environment the result
> could be a significant decrease in overall throughput. And in general,
> you really should not change values of properties if you're not sure
> about what they do.
>
> Mike
>
>   
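
(For reference, should anyone want to experiment with those two despite
the warning above: they are ordinary JCIFS properties and would have to
be set before any connection is opened, along the lines of the sketch
below, or with the equivalent -D system properties on the command line.
The values shown are purely illustrative.)

    // Sketch only - per the note above, changing these is usually a bad idea.
    jcifs.Config.setProperty("jcifs.smb.client.snd_buf_size", "16644");
    jcifs.Config.setProperty("jcifs.smb.client.rcv_buf_size", "60416");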
