[jcifs] max_xmit - why 4280?

Del Merritt del at alum.mit.edu
Fri Apr 30 08:53:45 MDT 2010


[Reviving an old thread; I ask your patience.  Picking up the
conversation inline below.]

On 03/12/2010 01:48 PM, Del Merritt wrote:
> On 03/12/2010 12:05 PM, Michael B Allen wrote:
>   
>> On Fri, Mar 12, 2010 at 11:19 AM, Del Merritt <del at alum.mit.edu> wrote:
>>   
>>     
>>> On 03/12/2010 10:15 AM, Michael B Allen wrote:
>>>
>>> On Thu, Mar 11, 2010 at 5:46 PM, Del Merritt <del at alum.mit.edu> wrote:
>>>
>>>
>>> On 03/11/2010 05:28 PM, Michael B Allen wrote:
>>>
>>>
>>> On Thu, Mar 11, 2010 at 12:14 PM, Del Merritt <del at alum.mit.edu> wrote:
>>>
>>>
>>> A question was raised a while back about getting a DcerpcException with the
>>> message "Fragmented request PDUs currently not supported":
>>>
>>> http://article.gmane.org/gmane.network.samba.java/6745/match=fragmented+request+pdus
>>>
>>> I am seeing this too, and it appears that the values for max_xmit and
>>> max_recv are hardcoded to 4280 in jcifs/dcerpc/DcerpcHandle.java; I'm using
>>> JCIFS 1.3.14 as a baseline.
>>>
>>> Is there a reason for this limit?  Is it something that should be negotiated
>>> and just hasn't happened yet?  I'm still learning the ins and outs of CIFS
>>> as I am porting JCIFS to a J2ME environment.
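
[Note for anyone catching up: with max_xmit pinned at 4280, a single
request PDU carries roughly max_xmit minus the 24-byte connection-oriented
request header worth of stub data, i.e. a bit over 4KB per call.  The
arithmetic I'm assuming:]

    // Back-of-the-envelope only; 24 = 16-byte common DCERPC header plus
    // alloc_hint(4) + context id(2) + opnum(2), and no auth trailer.
    static final int MAX_XMIT = 4280;          // the value hardcoded in DcerpcHandle
    static final int REQUEST_HEADER_LEN = 24;

    static int maxStubBytesPerRequest() {
        return MAX_XMIT - REQUEST_HEADER_LEN;  // about 4256 bytes of payload
    }
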
>>>
>>>
>>>
>>> Hi Del,
>>>
>>> Those are just observed values.
>>>
>>>
>>> Sorry - would you elaborate?
>>>
>>>
>>> The values for max_xmit and max_recv are values that Windows clients
>>> have been observed using. Windows is as Windows does. So we just copy
>>> what Windows does and you should too.
>>>
>>>
>>> Fair 'nuf.
>>>
>>> Note that JCIFS supports fragmented response PDUs. It just doesn't
>>> support fragmented *request* PDUs. It's somewhat unusual for a DCERPC
>>> request to contain a lot of data. Usually it's the response that is
>>> big.
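
[Inline note: the receive side works because the transport keeps reading
response PDUs until one arrives with the last-fragment bit set,
reassembling the stub as it goes; there is no mirror image of that loop
on the send side.  Conceptually - this is not jCIFS code, and the two
helpers are made up:]

    // pfc_flags is the 4th byte of the CO header; response stub data
    // starts at offset 24; PFC_LAST_FRAG is 0x02.
    ByteArrayOutputStream stub = new ByteArrayOutputStream();
    int flags;
    do {
        byte[] pdu = readOnePdu(pipe);            // hypothetical helper
        flags = pdu[3] & 0xFF;
        stub.write(pdu, 24, stubLengthOf(pdu));   // hypothetical helper
    } while ((flags & 0x02) == 0);                // stop at PFC_LAST_FRAG
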
>>>
>>>
>>>
>>> I am trying to send large chunks of data (that when assembled will be
>>> a "file") to another system that I discover via JCIFS.  To minimize
>>> overhead - mostly in the form of time spent - on the sending host, I'm
>>> trying to make those chunks as big as is reasonable.
>>>
>>>
>>> I think you're confused. The DCERPC layer has nothing to do with
>>> sending files.
>>>
>>> I probably am somewhat confused.  What I am trying to do is to discover
>>> systems with printers and then send data - a "file" - to a printer in what
>>> is considered "RAW" mode.
>>>
>>> I do the send-the-data step currently thus:
>>>
>>> get a DcerpcHandle to the server's printer
>>> loop sending the data with a class that extends DcerpcMessage in 4Kish
>>> chunks
>>>
>>> The chunk size is dependent on max_xmit since I am using the DCERPC layer.
>>> Sorry that's greatly simplified/reduced.  To replicate the code here would
>>> require lots of classes.  At the protocol level - e.g., what wireshark shows
>>> - I'm basically doing:
>>>
>>> a DCERPC bind to the host's \PIPE\spoolss
>>> a SPOOLSS\OpenPrinter
>>> a SPOOLSS\StartDocPrinter
>>> a bunch of SPOOLSS/WritePrinter calls, and finally, when the data is
>>> drained,
>>> closing things down when it's all done
>>>
>>> What would you recommend as an alternative?
>>>     
>>>       
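
[For reference, the simplified flow quoted above looks roughly like this.
The only jCIFS pieces used are DcerpcHandle.getHandle()/sendrecv() and
NtlmPasswordAuthentication; OpenPrinterEx, StartDocPrinter, WritePrinter,
EndDocPrinter and ClosePrinter are my own DcerpcMessage subclasses, and
their names, fields and constructors here are illustrative only:]

    // Sketch of the send-a-raw-job path described above.
    // imports: java.io.*, jcifs.dcerpc.DcerpcHandle,
    //          jcifs.smb.NtlmPasswordAuthentication
    void printRaw(String server, String printer, InputStream job,
                  NtlmPasswordAuthentication auth) throws IOException {
        DcerpcHandle handle = DcerpcHandle.getHandle(
                "ncacn_np:" + server + "[\\PIPE\\spoolss]", auth);
        try {
            OpenPrinterEx open = new OpenPrinterEx("\\\\" + server + "\\" + printer);
            handle.sendrecv(open);                        // yields a policy handle
            handle.sendrecv(new StartDocPrinter(open.policyHandle, "RAW"));
            byte[] chunk = new byte[4096];                // bounded by max_xmit
            int n;
            while ((n = job.read(chunk)) > 0) {
                handle.sendrecv(new WritePrinter(open.policyHandle, chunk, n));
            }
            handle.sendrecv(new EndDocPrinter(open.policyHandle));
            handle.sendrecv(new ClosePrinter(open.policyHandle));
        } finally {
            handle.close();
        }
    }
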
>> Hi Del,
>>
>> So you *are* using DCERPC.
>>   
>>     
> Ayup.  That's why I was asking about that low level area.  And I am
> still learning, so I don't like to AssUMe I am correct at first
> introduction ;-)
>
>   
>> However unfortunately changing those values will not help because
>> there is an upper limit of 64K.
>>   
>>     
> Yes, I'm aware there are other limits - like the "static final byte[]
> BUF" with its 64K limit.  In other news, I'm still not completely happy
> with the synchronization on BUF, so I made it a class variable in my
> local copy.  But that's a different story.
>
>   
>> As I stated before, JCIFS can handle fragmented response PDUs but not
>> fragmented *request* PDUs. Meaning it can read multiple chunks from
>> the DCERPC service but in your case, you want to send fragmented
>> request PDUs. JCIFS simply does not have the logic to handle that. It
>> can only send one request PDU and a print job will not fit in that.
>> Not even close.
>>
>> Unfortunately you're basically SOL because to fix this would require
>> modifying DcerpcHandle.sendrecv and if *I* didn't do it that means
>> it's probably not trivial (but I don't recall why I didn't do it).
>>
>> If you're really bent on getting this to work, you're going to need to
>> get a capture of a print job being submitted and study it carefully in
>> WireShark and modify DcerpcHandle.sendrecv to handle fragmented
>> request PDUs. Unfortunately that's about the extent of what I can help
>> you with. As much as I would like to see JCIFS have this working, I
>> just don't have the time.
>>   
>>     
>   

Mike et al. -

I've dug in, and it looks to me like there is a significant performance
win in using AndX blocks to send large chunks of data.  When a printer is
opened via OpenPrinterEx() from a Windows system, the client subsequently
writes the job data in 64KB chunks using "Write AndX".  I believe this is
a win because the remote system gets to fill up its own local buffer and
only then do the "slow" processing on a larger chunk of data.  The amount
of data going over the wire is the same, but the heavy processing happens
only after each 64KB write rather than after every ~4KB normal Write -
roughly one larger transaction in place of sixteen smaller ones - so
there is a net improvement.

I'm looking for some architectural advice: if I feed my changes back,
I'd like there to be a good chance you'd be willing to take them with
minimal fuss.

I'm currently looking at two possible implementations:

   1. Add a DcerpcAndXHandle class, homologous to DcerpcPipeHandle, and
      figure out the right way to use it when I want to write to the
      target that was opened in my version of OpenPrinterEx().
   2. Simply have my WritePrinter() use a locally-opened
      SmbFileOutputStream and build the DcerpcMessage fragments on the
      fly (see the sketch just below).
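
Here's very roughly what I have in mind for #2.  This is a sketch only:
encodeWritePrinterFragment() is a made-up helper that would NDR-encode a
single WritePrinter request PDU with the fragment flags set, and the
bind/OpenPrinterEx plumbing over the same pipe is omitted.

    // Option #2, sketched.  Uses jCIFS's SmbNamedPipe for the transport;
    // everything spoolss-specific here is hypothetical.
    // imports: java.io.*, jcifs.smb.SmbNamedPipe,
    //          jcifs.smb.NtlmPasswordAuthentication
    void writeJob(String server, byte[] policyHandle, InputStream job,
                  NtlmPasswordAuthentication auth) throws IOException {
        SmbNamedPipe pipe = new SmbNamedPipe(
                "smb://" + server + "/IPC$/spoolss",
                SmbNamedPipe.PIPE_TYPE_RDWR, auth);
        OutputStream out = pipe.getNamedPipeOutputStream();
        try {
            byte[] chunk = new byte[65536];        // a Write AndX sized payload
            int n;
            while ((n = job.read(chunk)) > 0) {
                out.write(encodeWritePrinterFragment(policyHandle, chunk, n));
            }
        } finally {
            out.close();
        }
    }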


In both cases, OpenPrinterEx and WritePrinter are classes of my own,
outside the jCIFS package itself.

I don't like option #2 as much since it doesn't seem general, but #1 has
a broader scope and code impact if it is to integrate well with jCIFS as
I sort-of understand it.  A third option - which I have not tested yet -
is to see whether the same performance win occurs over a pipe handle
with fragmented request PDUs; a rough sketch of what that might involve
follows.  I haven't pursued it yet since Wireshark shows that Windows
prefers AndX SMBs in this case, and I'm doing this in search of improved
performance.
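
For the record, the fragmentation logic I'd expect DcerpcHandle.sendrecv
(or whatever replaces it) to need is roughly the following.  This is a
sketch against the standard connection-oriented DCERPC header layout,
not working code; sendFragment() is a placeholder for however the bytes
actually reach the pipe.

    // Split an encoded request stub into max_xmit-sized pieces, marking
    // the first piece with PFC_FIRST_FRAG (0x01) and the last with
    // PFC_LAST_FRAG (0x02); 24 is the CO request header length.
    static final int HEADER_LEN = 24;

    void sendFragmented(byte[] stub, int stubLen, int maxXmit) throws IOException {
        int maxStub = maxXmit - HEADER_LEN;
        int off = 0;
        boolean first = true;
        while (off < stubLen) {
            int n = Math.min(maxStub, stubLen - off);
            boolean last = (off + n) == stubLen;
            int flags = (first ? 0x01 : 0) | (last ? 0x02 : 0);
            sendFragment(flags, stub, off, n);     // placeholder
            first = false;
            off += n;
        }
    }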

Suggestions?  Thanks!

-Del

> Mike, don't fret.  I am not asking you to change anything.  What it
> tells me is that it's a semi-bug, and that if I were to implement a
> change that Feels Good, I could hand it back to you for your consideration.
>
> A problem with any software source is that it isn't always clear when
> the developer did something for a Good Reason or Just Because.  :-)
>
> Thanks,
> -Del
>   
