[jcifs] max_xmit - why 4280?

Michael B Allen ioplex at gmail.com
Fri Mar 12 10:05:42 MST 2010

On Fri, Mar 12, 2010 at 11:19 AM, Del Merritt <del at alum.mit.edu> wrote:
> On 03/12/2010 10:15 AM, Michael B Allen wrote:
>> On Thu, Mar 11, 2010 at 5:46 PM, Del Merritt <del at alum.mit.edu> wrote:
>>> On 03/11/2010 05:28 PM, Michael B Allen wrote:
>>>> On Thu, Mar 11, 2010 at 12:14 PM, Del Merritt <del at alum.mit.edu> wrote:
>>>>> A question was raised a while back about getting a DcerpcException with the
>>>>> message "Fragmented request PDUs currently not supported":
>>>>> http://article.gmane.org/gmane.network.samba.java/6745/match=fragmented+request+pdus
>>>>> I am seeing this too, and it appears that the values for max_xmit and
>>>>> max_recv are hardcoded to 4280 in jcifs/dcerpc/DcerpcHandle.java; I'm using
>>>>> JCIFS 1.3.14 as a baseline.
>>>>> Is there a reason for this limit?  Is it something that should be negotiated
>>>>> and just hasn't happened yet?  I'm still learning the ins and outs of CIFS
>>>>> as I am porting JCIFS to a J2ME environment.
>>>> Hi Del,
>>>> Those are just observed values.
>>> Sorry - would you elaborate?
>> The values for max_xmit and max_recv are values that Windows clients
>> have been observed using. Windows is as Windows does. So we just copy
>> what Windows does and you should too.
> Fair 'nuf.
>> Note that JCIFS supports fragmented response PDUs. It just doesn't
>> support fragmented *request* PDUs. It's somewhat unusual for a DCERPC
>> request to contain a lot of data. Usually it's the response that is
>> big.
> I am trying to send large chunks of data (that when assembled will be
> a "file") to another system that I discover via JCIFS.  To minimize
> overhead - mostly in the form of time spent - on the sending host, I'm
> trying to make those chunks as big as is reasonable.
>> I think you're confused. The DCERPC layer has nothing to do with
>> sending files.
> I probably am somewhat confused.  What I am trying to do is to discover
> systems with printers and then send data - a "file" - to a printer in what
> is considered "RAW" mode.
> I do the send-the-data step currently thus:
>   1. get a DcerpcHandle to the server's printer
>   2. loop sending the data with a class that extends DcerpcMessage in
>      4K-ish chunks
> The chunk size is dependent on max_xmit since I am using the DCERPC layer.
> Sorry that's greatly simplified/reduced.  To replicate the code here would
> require lots of classes.  At the protocol level - e.g., what Wireshark
> shows - I'm basically doing:
>   1. a DCERPC bind to the host's \PIPE\spoolss
>   2. a SPOOLSS\OpenPrinter
>   3. a SPOOLSS\StartDocPrinter
>   4. a bunch of SPOOLSS\WritePrinter calls, and finally, when the data
>      is drained,
>   5. closing things down when it's all done
> What would you recommend as an alternative?
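[The chunk-size arithmetic described in the quoted message can be sketched as follows. This is an editor's illustration, not jCIFS code: the 24-byte figure is the standard DCERPC request PDU header (16-byte common header plus 4-byte alloc_hint, 2-byte context id, 2-byte opnum), while the per-call WritePrinter stub overhead is an assumed round number for illustration.]

```java
// Hypothetical sketch: how much raw print data fits in one WritePrinter
// call when the PDU is capped at max_xmit bytes. Not jCIFS code.
public class ChunkSize {
    static final int MAX_XMIT = 4280;            // value jCIFS hardcodes
    static final int REQUEST_HEADER = 24;        // DCERPC request PDU header
    static final int WRITEPRINTER_OVERHEAD = 28; // assumed stub overhead (illustrative)

    // Raw data bytes that fit in a single request PDU.
    static int maxDataPerCall() {
        return MAX_XMIT - REQUEST_HEADER - WRITEPRINTER_OVERHEAD;
    }

    // Ceiling division: WritePrinter calls needed for a whole job.
    static int callsNeeded(int jobBytes) {
        int perCall = maxDataPerCall();
        return (jobBytes + perCall - 1) / perCall;
    }

    public static void main(String[] args) {
        System.out.println(maxDataPerCall());     // 4228
        System.out.println(callsNeeded(1 << 20)); // calls for a 1 MiB job
    }
}
```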

Hi Del,

So you *are* using DCERPC.

However, unfortunately, changing those values will not help, because the
DCERPC fragment length field imposes an upper limit of 64K on any single
PDU.

As I stated before, JCIFS can handle fragmented *response* PDUs but not
fragmented *request* PDUs. Meaning it can read a response that arrives
from the DCERPC service in multiple chunks, but in your case you need to
*send* fragmented request PDUs, and JCIFS simply does not have the logic
to handle that. It can only send a single request PDU, and a print job
will not fit in one. Not even close.
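[Editor's note: the 64K ceiling mentioned above follows from the wire format. The DCERPC common header stores frag_length in an unsigned 16-bit field, so no single PDU, and hence no unfragmented request, can exceed 65535 bytes no matter what max_xmit is raised to. A minimal sketch:]

```java
// Sketch of the hard PDU size ceiling. frag_length is a u16 in the
// DCERPC common header, so a single request PDU can never carry more
// than 65535 bytes total, header included.
public class FragLimit {
    static final int FRAG_LENGTH_MAX = 0xFFFF; // u16 frag_length field
    static final int REQUEST_HEADER = 24;      // request PDU header bytes

    // Largest stub payload a single request PDU could ever carry.
    static int singlePduCapacity() {
        return FRAG_LENGTH_MAX - REQUEST_HEADER;
    }

    public static void main(String[] args) {
        int printJob = 5 * 1024 * 1024; // a modest 5 MiB raw print job
        System.out.println(printJob > singlePduCapacity()); // true: it will not fit
    }
}
```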

Unfortunately you're basically SOL, because fixing this would require
modifying DcerpcHandle.sendrecv, and if *I* didn't do it, it's probably
not trivial (though I don't recall why I didn't do it).

If you're really bent on getting this to work, you're going to need to
capture a print job being submitted, study it carefully in Wireshark,
and modify DcerpcHandle.sendrecv to handle fragmented request PDUs.
Unfortunately, that's about the extent of what I can help you with. As
much as I would like to see JCIFS support this, I just don't have the
time.
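[Editor's note: the splitting logic such a modified DcerpcHandle.sendrecv would need can be sketched in isolation. This is an assumed illustration, not jCIFS code: slice the marshalled stub into max_xmit-sized pieces, set PFC_FIRST_FRAG (0x01) on the first PDU and PFC_LAST_FRAG (0x02) on the last, and reuse one call_id across all fragments. The Pdu class here is a hypothetical stand-in for real PDU marshalling.]

```java
import java.util.ArrayList;
import java.util.List;

// Minimal sketch of request-PDU fragmentation. Pdu is a hypothetical
// stand-in; real code would marshal full DCERPC headers.
public class RequestFragmenter {
    static final int PFC_FIRST_FRAG = 0x01;
    static final int PFC_LAST_FRAG  = 0x02;
    static final int REQUEST_HEADER = 24; // common header + alloc_hint/ctx/opnum

    static class Pdu {
        final int flags;    // pfc_flags for this fragment
        final byte[] stub;  // this fragment's slice of the stub data
        Pdu(int flags, byte[] stub) { this.flags = flags; this.stub = stub; }
    }

    static List<Pdu> fragment(byte[] stub, int maxXmit) {
        int perPdu = maxXmit - REQUEST_HEADER; // stub bytes per fragment
        List<Pdu> pdus = new ArrayList<>();
        for (int off = 0; off < stub.length; off += perPdu) {
            int len = Math.min(perPdu, stub.length - off);
            int flags = 0;
            if (off == 0) flags |= PFC_FIRST_FRAG;            // first fragment
            if (off + len == stub.length) flags |= PFC_LAST_FRAG; // last fragment
            byte[] chunk = new byte[len];
            System.arraycopy(stub, off, chunk, 0, len);
            pdus.add(new Pdu(flags, chunk));
        }
        return pdus;
    }

    public static void main(String[] args) {
        List<Pdu> pdus = fragment(new byte[10000], 4280);
        System.out.println(pdus.size());                     // 3
        System.out.println(pdus.get(0).flags);               // 1 (FIRST)
        System.out.println(pdus.get(pdus.size() - 1).flags); // 2 (LAST)
    }
}
```

A stub small enough to fit in one PDU gets both flags set (0x03), which is exactly the unfragmented case jCIFS already handles.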


Michael B Allen
Java Active Directory Integration
