[jcifs] max_xmit - why 4280?

Del Merritt del at alum.mit.edu
Fri Mar 12 11:48:47 MST 2010


On 03/12/2010 12:05 PM, Michael B Allen wrote:
> On Fri, Mar 12, 2010 at 11:19 AM, Del Merritt <del at alum.mit.edu> wrote:
>   
>> On 03/12/2010 10:15 AM, Michael B Allen wrote:
>>
>> On Thu, Mar 11, 2010 at 5:46 PM, Del Merritt <del at alum.mit.edu> wrote:
>>
>>
>> On 03/11/2010 05:28 PM, Michael B Allen wrote:
>>
>>
>> On Thu, Mar 11, 2010 at 12:14 PM, Del Merritt <del at alum.mit.edu> wrote:
>>
>>
>> A question was raised a while back about getting a DcerpcException with the
>> message "Fragmented request PDUs currently not supported":
>>
>> http://article.gmane.org/gmane.network.samba.java/6745/match=fragmented+request+pdus
>>
>> I am seeing this too, and it appears that the values for max_xmit and
>> max_recv are hardcoded to 4280 in jcifs/dcerpc/DcerpcHandle.java; I'm using
>> JCIFS 1.3.14 as a baseline.
>>
>> Is there a reason for this limit?  Is it something that should be negotiated
>> and just hasn't happened yet?  I'm still learning the ins and outs of CIFS
>> as I am porting JCIFS to a J2ME environment.
>>
>>
>>
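The declarations I'm referring to look roughly like this in 1.3.14
(paraphrased - check the actual source for the exact form):

    // jcifs/dcerpc/DcerpcHandle.java (1.3.14, approximate): the limits
    // are plain field initializers, not values negotiated at bind time.
    protected int max_xmit = 4280;
    protected int max_recv = 4280;
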
>> Hi Del,
>>
>> Those are just observed values.
>>
>>
>> Sorry - would you elaborate?
>>
>>
>> The values for max_xmit and max_recv are values that Windows clients
>> have been observed using. Windows is as Windows does. So we just copy
>> what Windows does and you should too.
>>
>>
>> Fair 'nuf.
>>
>> Note that JCIFS supports fragmented response PDUs. It just doesn't
>> support fragmented *request* PDUs. It's somewhat unusual for a DCERPC
>> request to contain a lot of data. Usually it's the response that is
>> big.
>>
>>
>>
>> I am trying to send large chunks of data (that, when assembled, will
>> be a "file") to another system that I discover via JCIFS.  To minimize
>> overhead - mostly in the form of time spent - on the sending host, I'm
>> trying to make those chunks as big as is reasonable.
>>
>>
>> I think you're confused. The DCERPC layer has nothing to do with
>> sending files.
>>
>> I probably am somewhat confused.  What I am trying to do is to discover
>> systems with printers and then send data - a "file" - to a printer in what
>> is considered "RAW" mode.
>>
>> I do the send-the-data step currently thus:
>>
>> get a DcerpcHandle to the server's printer
>> loop sending the data, in roughly 4K chunks, with a class that extends
>> DcerpcMessage
>>
>> The chunk size is dependent on max_xmit since I am using the DCERPC layer.
>> Sorry that's greatly simplified/reduced.  To replicate the code here would
>> require lots of classes.  At the protocol level - e.g., what Wireshark shows
>> - I'm basically doing:
>>
>> a DCERPC bind to the host's \PIPE\spoolss
>> a SPOOLSS\OpenPrinter
>> a SPOOLSS\StartDocPrinter
>> a bunch of SPOOLSS\WritePrinter calls, and finally, when the data is
>> drained, closing things down
>>
>> What would you recommend as an alternative?
>>     
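To make the steps quoted above concrete, my send loop boils down to the
sketch below.  WritePrinterMsg is a simplified stand-in for my real
DcerpcMessage subclass (the marshaling is my reading of the WritePrinter
wire format, so verify it against a capture), and the OpenPrinter /
StartDocPrinter plumbing is omitted:

    import jcifs.dcerpc.DcerpcHandle;
    import jcifs.dcerpc.DcerpcMessage;
    import jcifs.dcerpc.ndr.NdrBuffer;
    import jcifs.dcerpc.ndr.NdrException;
    import jcifs.smb.NtlmPasswordAuthentication;

    // Simplified stand-in for my DcerpcMessage subclass.  The marshaling
    // is my reading of SPOOLSS WritePrinter (opnum 19): the printer's
    // policy handle, then the data as a conformant byte array, then cbBuf.
    class WritePrinterMsg extends DcerpcMessage {
        private final byte[] hPrinter;   // 20-byte handle from OpenPrinter
        private final byte[] chunk;
        int written;                     // [out] pcWritten
        int retval;                      // [out] Win32 status

        WritePrinterMsg(byte[] hPrinter, byte[] chunk) {
            this.hPrinter = hPrinter;
            this.chunk = chunk;
            ptype = 0;                                   // request PDU
            flags = DCERPC_FIRST_FRAG | DCERPC_LAST_FRAG;
        }
        public int getOpnum() { return 0x13; }           // WritePrinter
        public void encode_in(NdrBuffer buf) throws NdrException {
            buf.writeOctetArray(hPrinter, 0, 20);
            buf.enc_ndr_long(chunk.length);              // conformance count
            buf.writeOctetArray(chunk, 0, chunk.length);
            buf.enc_ndr_long(chunk.length);              // cbBuf
        }
        public void decode_out(NdrBuffer buf) throws NdrException {
            written = buf.dec_ndr_long();
            retval = buf.dec_ndr_long();
        }
    }

and then the loop itself, feeding the job in pieces small enough that
each request PDU stays under max_xmit:

    // Bind to the server's spooler pipe and stream the job in chunks.
    void sendJob(String server, NtlmPasswordAuthentication auth,
                 byte[] hPrinter, byte[] data) throws Exception {
        DcerpcHandle handle = DcerpcHandle.getHandle(
                "ncacn_np:" + server + "[\\PIPE\\spoolss]", auth);
        try {
            final int CHUNK = 4096;      // < max_xmit minus header overhead
            for (int off = 0; off < data.length; off += CHUNK) {
                int len = Math.min(CHUNK, data.length - off);
                byte[] piece = new byte[len];
                System.arraycopy(data, off, piece, 0, len);
                handle.sendrecv(new WritePrinterMsg(hPrinter, piece));
            }
        } finally {
            handle.close();
        }
    }
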
> Hi Del,
>
> So you *are* using DCERPC.
>   

Ayup.  That's why I was asking about that low-level area.  And I am
still learning, so I don't like to AssUMe I am correct at first
introduction ;-)

> However, unfortunately, changing those values will not help because
> there is an upper limit of 64K.
>   

Yes, I'm aware there are other limits - like the "static final byte[]
BUF" with its 64K limit.  In other news, I'm still not completely happy
with the synchronization on BUF, so I made it an instance variable in my
local copy.  But that's a different story.
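
For concreteness, my local change amounts to a per-handle scratch buffer
in place of the shared static one (a sketch, paraphrasing the stock code
from memory):

    // Stock 1.3.14 (approximate): one 64K buffer shared by every handle,
    // so all sendrecv calls serialize on the same lock.
    //     private static final byte[] BUF = new byte[0x10000];
    // Per-instance instead: each handle gets its own scratch space, and
    // handles no longer contend with one another.
    private final byte[] buf = new byte[0x10000];   // 64K, the PDU ceiling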

> As I stated before, JCIFS can handle fragmented response PDUs but not
> fragmented *request* PDUs. Meaning it can read multiple chunks from
> the DCERPC service, but in your case you want to send fragmented
> request PDUs. JCIFS simply does not have the logic to handle that. It
> can only send one request PDU and a print job will not fit in that.
> Not even close.
>
> Unfortunately you're basically SOL because to fix this would require
> modifying DcerpcHandle.sendrecv, and if *I* didn't do it, that means
> it's probably not trivial (but I don't recall why I didn't do it).
>
> If you're really bent on getting this to work, you're going to need to
> get a capture of a print job being submitted, study it carefully in
> Wireshark, and modify DcerpcHandle.sendrecv to handle fragmented
> request PDUs. Unfortunately that's about the extent of what I can help
> you with. As much as I would like to see JCIFS have this working, I
> just don't have the time.
>   

Mike, don't fret.  I am not asking you to change anything.  What it
tells me is that it's a semi-bug, and that if I were to implement a
change that Feels Good, I could hand it back to you for your consideration.

A problem with any software source is that it isn't always clear whether
the developer did something for a Good Reason or Just Because.  :-)
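
If I do take a crack at it, my first cut would be along the lines of the
loop below - splitting the marshaled stub across several request PDUs
and setting the fragment flags per the spec.  This would live inside
DcerpcHandle, where max_xmit is in scope; sendFragment() is a
hypothetical helper that would wrap one chunk in its own 24-byte request
header, and the receive path can stay as-is, since JCIFS already
reassembles fragmented responses:

    // Untested sketch of fragmenting an oversized request stub.
    // DCERPC_FIRST_FRAG (0x01) and DCERPC_LAST_FRAG (0x02) are the real
    // constants from jcifs.dcerpc.DcerpcConstants; sendFragment() is
    // hypothetical.
    void sendFragmentedRequest(byte[] stub) throws IOException {
        int maxData = max_xmit - 24;   // 24-byte request header per PDU
        int off = 0;
        while (off < stub.length) {
            int len = Math.min(maxData, stub.length - off);
            int flags = 0;
            if (off == 0)
                flags |= DCERPC_FIRST_FRAG;
            if (off + len == stub.length)
                flags |= DCERPC_LAST_FRAG;
            sendFragment(flags, stub, off, len);   // hypothetical helper
            off += len;
        }
    }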

Thanks,
-Del

p.s. - I am still chasing a hang situation; it is not clear whether it's
the underlying TCP support from the OS, the vendor-supplied socket
libraries, or a real issue with thread synchronization in JCIFS.  If you
have any extra hints on JCIFS-specific debugging tricks in this area,
I'm all ears.
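
So far the only JCIFS-specific knob I've found myself is its log level;
the 1.x series reads the jcifs.util.loglevel property when its classes
load, and higher values make it easier to tell a wedged socket from a
request that never got its response:

    // Set before any jcifs class loads, or pass -Djcifs.util.loglevel=6
    // on the command line; higher values add per-message detail, and 6
    // and up include hexdumps of the raw packets.
    System.setProperty("jcifs.util.loglevel", "6");

Pointers beyond that are welcome.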

