RFC trying to support non RPC Pipe service(s)

Stefan (metze) Metzmacher metze at samba.org
Tue Jul 8 03:03:38 MDT 2014

Hi Noel,

>>> I have been looking into MS-WSP which communicates over SMB with named
>>> pipes. However, unlike other named-pipe services it doesn't use RPC. The
>>> patchset attached is a WIP implementation of some infrastructure heavily
>>> based/influenced by the RPC framework to help provide services that use
>>> pipes in that manner and/or create clients that want to communicate with
>>> such services.
>>> In my own mind I have been calling this mode of operation 'raw' pipe (as
>>> opposed to RPC) and I use that term heavily in naming of functions,
>>> files etc.  However, it strikes me that the term might mean
>>> something completely different in the SMB world (which I know little
>>> about); if so it would be good to get input on a different term or name
>>> to use if appropriate.
>>> One of my main worries is that I have missed something completely
>>> obvious and that my little bit of work here is completely unnecessary, so
>>> it would be good to know if that is the case, e.g. is this nuts? Should
>>> I have done/used something else?
>>> Some other random notes:
>>>   * Currently, although the code here should (afaict) be independent of
>>> the RPC implementation it is based on, it lives in the same directory
>>> structure; I can change that of course, but I would like to get some more
>>> direction before making possibly useless changes
>>>   * perhaps more could be shared with the existing RPC code :-/ but for
>>> me at the moment there is clarity in the separation
>>>   * I'm aware the DEBUG statements are too many and at inappropriate
>>> debug levels ;-)
>>>   * there is only a source3 implementation, I notice there are also
>>> source4 implementations for rpc servers (but I confess I haven't looked
>>> to see what differences there are or why they are duplicated)
>>>   * similarly the idl only generates a synchronous client helper at the
>>> moment
>>>   * there is a reference implementation of a server and client in the
>>> patch set which probably illustrates things better
>> I'm not sure we really need a heavyweight infrastructure like this,
>> basically copying multiple layers of the DCERPC infrastructure.
> understood, to be fair I was hoping to get to a situation where most of
> the code that I am (semi) duplicating could be shared somehow
>> And as every protocol that's implemented on top of named pipes
>> is different,
> In this case there are at least 2 similar protocols (no idea if there
> are more) so it seemed maybe a good idea

I only know about MS-WSP, which would use the "raw" infrastructure; what
is the other one?

>>  I'd propose to avoid a generic infrastructure at all.
> I guess when I got to the point of thinking about how to provide the
> server code (and following the rpc stuff for reference/learning) it
> looked neater to hook into the existing loop there (and indeed only a
> small amount of code is needed on the server side to do that).
> That is in part because I didn't separate it out either; it was
> convenient to just make the decision in the new
> 'handle_internal_pipe_socketpair' routine, and I don't need to do any
> hard-coded switching on pipe names. It didn't seem so straightforward
> on the client side (although in the initial implementation, with some
> creative code switching, it was possible to share a lot of code, the
> result was very, very ugly :-)

There we can share some of the code, but first we need to remove
everything that's dcerpc specific. We just need an infrastructure to
register well-known pipe names and a function (+ private_data) that
starts the server loop for the internal service.
It also needs to specify whether the pipe should be in byte or message mode.

For external services we should just try to connect to the unix domain socket.

>> The raw socket handling is already generic:
>> - tstream_npa_accept_existing_send/recv() handles this for the server,
>>   see the code around setup_named_pipe_socket() in
>>   source3/rpc_server/rpc_server.c
>>   or tstream_setup_named_pipe() in source4/smbd/service_named_pipe.c
>> - tstream_smbXcli_np_open_send/recv handles it for the client;
>>   something like your patch to change the fragment size should be fine.
>> So basically the client and server just have a tstream socket, which
>> represents the named pipe. The server decides whether it is used in
>> message or byte mode, and each side can read and write bytes to the
>> socket. Then we need a helper function to read a pdu from the socket
>> and a helper function to write a pdu into the socket.
>> They can use idl generated structures, but only if it's easy to represent
>> the protocol in something similar to ndr.
>> The server can be a completely separate binary or forked from the main
>> smbd.
> I am still fond of the idea of getting as much stuff for free as I can.
> If you don't see the value in having a more generic solution, would
> there be any objection to still using the (non idl generated) server side
> hooks (e.g. raw_pipe_entry in source3/rpc_server/srv_pipe_register.c)? I
> could then manually register the server etc. instead of using my idl
> generated server hooks.

source3/rpc_server/*.c is going to die soon.

The code from source3/rpc_server/srv_pipe_hnd.c can be simplified
as we only have FAKE_FILE_TYPE_NAMED_PIPE_PROXY and don't really need
this layer anymore. It can move to source3/smbd/pipes.c.
And there we could have the very tiny infrastructure I described above.

For the client side you might use idl, but instead of adding a new
raw_pipe_handle infrastructure you can implement a backend for the
dcerpc_binding_handle infrastructure. We've already done this
for wbint_binding_handle and irpc_binding_handle.



More information about the samba-technical mailing list