relationship between DCE/RPC and NT Named Pipes.

Luke Kenneth Casson Leighton lkcl at
Thu Jan 10 11:13:03 GMT 2002

On Thu, Jan 10, 2002 at 12:45:54PM -0500, Cole, Timothy D. wrote:

> > -----Original Message-----
> > From: Luke Kenneth Casson Leighton [mailto:lkcl at]
> > Sent: Wednesday, January 09, 2002 21:15
> > To: tng-technical at; David Allan Finch;
> > rsharpe at; samba-technical at;
> > freedce-dev at
> > Subject: Re: relationship between DCE/RPC and NT Named Pipes.
> > 
> > 
> > On Thu, Jan 10, 2002 at 12:49:27AM +0000, Luke Kenneth Casson 
> > Leighton wrote:
> > > unix supports TCP (guaranteed and ordered data delivery)
> > > and it supports UDP (guaranteed message sizes) but it doesn't
> > > support both.
> >  
> > 
> >  [if anyone has a more technically accurate explanation,
> >   i'd appreciate it if you could clarify this limited
> >   and ambiguous assessement: you know what i mean, but
> >   i can't find the words].
> > 
> >   lkcl
>  SOCK_STREAM (e.g. TCP in PF_INET) sockets provide the following semantics:
>   - connection-oriented
>   - guaranteed delivery
>   - guaranteed order
>   - no message boundaries (a byte stream: writes may be split or coalesced)

>  SOCK_DGRAM (e.g. UDP in PF_INET) sockets provide the following semantics:
>   - connectionless
>   - message boundaries preserved
>   - no guaranteed delivery
>   - no guaranteed order


>  NT named pipes supply (as I understand it):
>   - stream and datagram-based operations (read/write/send/recv equivalents)
>   - connection-oriented
>   - message granularity
>   - guaranteed order
>   - guaranteed delivery
>   - credentials passing
  i believe that the "message mode" flags can negotiate some of
  these capabilities; you would have to double-check the
  MSDN CreateNamedPipe function description.

> To supply the characteristics of NT named pipes for message passing, you
> need to layer a message-boundary aware protocol over SOCK_STREAM, and back
> up the socket with your own buffer to collect message fragments.

the [first-version!  i have to say this!] implementation that
i added to samba-tng has a rather unusual message-boundary
scheme.

basically i rely on three things:

1) i assume that data sent over SOCK_STREAM over unix-domain-sockets
never gets fragmented.

2) even if it _did_ get fragmented, i cope by doing two
reads-in-while-loops: one of 16 bytes to read the rpc header,
followed by a second set of reads-in-a-while-loop of the length
that the rpc header says the PDU is [minus 16 bytes of course].

basically: using read_socket_with_timeout(), iirc, in
util_sock.c, to guarantee that data read _is_ the amount needed,
and no less, or an error, or a timeout.

3) SMB timeouts are 30 seconds in length.  waiting for a timeout
on the proxy "named pipe" implementation of... say... finger-in-the-air
... fifteen seconds, is an acceptable timeout value to be able
to say "the server at the end of the named pipe is never going
to respond in time.  return a sensible SMB error code before
the SMB client times out, too!"
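
a minimal sketch of a timeout-bounded read along the lines of
read_socket_with_timeout (hypothetical name and signature; this
one just uses select() to bound the wait):

```c
#include <stddef.h>
#include <sys/select.h>
#include <sys/time.h>
#include <unistd.h>

/* wait at most timeout_ms for fd to become readable, then read.
   returns -1 on timeout or error, so the caller can hand back a
   sensible SMB error code before the client's own timeout fires. */
static ssize_t read_with_timeout(int fd, void *buf, size_t n, int timeout_ms)
{
    fd_set rfds;
    struct timeval tv = { timeout_ms / 1000, (timeout_ms % 1000) * 1000 };

    FD_ZERO(&rfds);
    FD_SET(fd, &rfds);
    if (select(fd + 1, &rfds, NULL, NULL, &tv) <= 0)
        return -1;              /* timed out, or select() failed */
    return read(fd, buf, n);
}
```

with a fifteen-second value here, the proxy gives up comfortably
inside the client's thirty-second window.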

> If you're feeling really masochistic, you can use SOCK_DGRAM (UDP) and
> layer a protocol to handle sequencing and guaranteed delivery of messages
> over it (backing up the socket with a buffer/list so you can reorder
> out-of-order messages and wait for resends of missing intermediate
> messages).
> It also requires you to do something about connection-oriented situations.
> UDP specifically also lacks TCP's congestion control, so as far as being
> a good Internet citizen, TCP is probably preferable.
> Hrm.  I forgot credentials passing above.  Oh well.
> Fwiw, if I remember correctly PF_UNIX sockets only support SOCK_STREAM.

linux _does_ support SOCK_DGRAM on unix domain sockets.
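
a quick sketch demonstrating this, using socketpair() with AF_UNIX
and SOCK_DGRAM (the boundary-preserving behaviour is the point;
the function name is my own):

```c
#include <string.h>
#include <sys/socket.h>
#include <unistd.h>

/* returns 1 if message boundaries were preserved, 0 otherwise */
static int dgram_boundaries_preserved(void)
{
    int sv[2];
    char buf[64];

    if (socketpair(AF_UNIX, SOCK_DGRAM, 0, sv) != 0)
        return 0;
    /* two separate datagrams in... */
    if (write(sv[0], "hello", 5) != 5 || write(sv[0], "world", 5) != 5)
        return 0;
    /* ...two separate datagrams out: each read returns exactly one
       message, never a coalesced stream */
    if (read(sv[1], buf, sizeof buf) != 5 || memcmp(buf, "hello", 5) != 0)
        return 0;
    if (read(sv[1], buf, sizeof buf) != 5 || memcmp(buf, "world", 5) != 0)
        return 0;
    close(sv[0]);
    close(sv[1]);
    return 1;
}
```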

> Unlike
> PF_INET, however, they also support passing file descriptors (which are
> really
> just Unix's abstraction of capabilities (in the sense of capability-based
> security) plus a little state information -- maybe 1/2 of credentials
> passing).
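
for reference, the fd passing mentioned above looks roughly like
this over an AF_UNIX socket (a sketch using SCM_RIGHTS ancillary
data; the helper names are made up):

```c
#include <string.h>
#include <sys/socket.h>
#include <sys/uio.h>
#include <unistd.h>

/* send one open file descriptor over an AF_UNIX socket */
static int send_fd(int sock, int fd)
{
    char dummy = 'x';           /* must send at least one real byte */
    struct iovec iov = { .iov_base = &dummy, .iov_len = 1 };
    union { struct cmsghdr h; char buf[CMSG_SPACE(sizeof(int))]; } u;
    struct msghdr msg = {0};

    msg.msg_iov = &iov;
    msg.msg_iovlen = 1;
    msg.msg_control = u.buf;
    msg.msg_controllen = sizeof u.buf;

    struct cmsghdr *c = CMSG_FIRSTHDR(&msg);
    c->cmsg_level = SOL_SOCKET;
    c->cmsg_type = SCM_RIGHTS;  /* "pass these descriptors along" */
    c->cmsg_len = CMSG_LEN(sizeof(int));
    memcpy(CMSG_DATA(c), &fd, sizeof(int));

    return sendmsg(sock, &msg, 0) == 1 ? 0 : -1;
}

/* receive a file descriptor sent by send_fd(); -1 on failure */
static int recv_fd(int sock)
{
    char dummy;
    struct iovec iov = { .iov_base = &dummy, .iov_len = 1 };
    union { struct cmsghdr h; char buf[CMSG_SPACE(sizeof(int))]; } u;
    struct msghdr msg = {0};
    int fd = -1;

    msg.msg_iov = &iov;
    msg.msg_iovlen = 1;
    msg.msg_control = u.buf;
    msg.msg_controllen = sizeof u.buf;

    if (recvmsg(sock, &msg, 0) != 1)
        return -1;
    struct cmsghdr *c = CMSG_FIRSTHDR(&msg);
    if (c && c->cmsg_level == SOL_SOCKET && c->cmsg_type == SCM_RIGHTS)
        memcpy(&fd, CMSG_DATA(c), sizeof(int));
    return fd;
}
```

the receiver ends up with its own descriptor referring to the same
open file: exactly the capability-handoff described above.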

> I _should_ note that there are such beasts as SOCK_RDM and SOCK_SEQPACKET
> in Unix (SOCK_DGRAM + reliable delivery, and SOCK_STREAM + message
> boundaries,
> respectively), but I don't know offhand what protocol families they are
> normally
> offered for.  Not PF_INET, unless there are some standard IP-based protocols
> I don't know about.
> (is this what you meant, luke?)

yes.... well... not the credentials as in "unix" credentials.

i mean sending the NET_USER_INFO_3 structure that you get back
from the PDC when you do a NetrSamLogon.

when you get an SMBsesssetupX request, you must connect to
netlogond-or-equivalent for that domain [regardless of whether
that's on loopback or it goes out over-the-wire: see below
for why].  you make a NetrReqChal, NetrAuth2, NetrSamLogon.

eventually, the logon request ends up at the PDC.  when the
PDC responds to the logon request, it returns one of these
NET_USER_INFO_3 structures.

you _must_ cache this structure and associate it with the SMB
session: it contains vital information that must be used to
answer RPC and LANMAN queries [this is why you must call in to
netlogond even over the ncacn_np interface even over loop-back.]

the NetWkstaUserLogon function and a couple of other LANMAN
queries query the workstation for any logged-on users, and
you are supposed to respond with the cached NET_USER_INFO_3.
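
a [purely illustrative!] sketch of that per-session cache: the
struct fields are heavily abridged (the real NET_USER_INFO_3 is
much bigger), but it shows the vuid-to-info association:

```c
#include <stdint.h>
#include <string.h>

#define MAX_SESSIONS 64

/* heavily abridged stand-in for the real NET_USER_INFO_3 */
struct net_user_info_3 {
    char user_name[256];
    char logon_domain[256];
    uint32_t user_rid;
};

struct session_entry {
    uint16_t vuid;              /* SMB session id from SMBsesssetupX */
    int in_use;
    struct net_user_info_3 info;
};

static struct session_entry sessions[MAX_SESSIONS];

/* stash the PDC's NetrSamLogon reply against the SMB session */
static int cache_user_info(uint16_t vuid, const struct net_user_info_3 *info)
{
    for (int i = 0; i < MAX_SESSIONS; i++) {
        if (!sessions[i].in_use) {
            sessions[i].in_use = 1;
            sessions[i].vuid = vuid;
            sessions[i].info = *info;
            return 0;
        }
    }
    return -1;                  /* table full */
}

/* later RPC / LANMAN queries (e.g. NetWkstaUserLogon) answer from here */
static const struct net_user_info_3 *lookup_user_info(uint16_t vuid)
{
    for (int i = 0; i < MAX_SESSIONS; i++)
        if (sessions[i].in_use && sessions[i].vuid == vuid)
            return &sessions[i].info;
    return NULL;
}
```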

so, it's all in userspace: no dependence on weird capabilities
or file-descriptor passing that doesn't exist on some unixen.
ux-dom-socks are pretty standard _if_ you stick to SOCK_STREAM.

