DCE/RPC over SMB - nt login, code walk-through.

Luke Kenneth Casson Leighton lkcl at samba.org
Mon Feb 14 02:58:26 GMT 2000


i particularly wanted to write this so that people can follow what is
going on, here.  it's not exactly trivial, so if anyone wanted to do a
code review, they'd need to know how it fits together.

i'm going to break this down by examples, first, starting with simple ones
and moving up.  i'll probably do this as a series, because it's a bit
long.

1) rpcclient -S . -U% -l log - lsaquery command

this is an anonymous connection explicitly to msrpc loop-back.  this can
only be done as root, or by making rpcclient setuid root (which i
don't recommend).

the lsaquery command does four dce/rpc function calls:
lsa_open_policy(".", policy_handle), lsa_query_info_policy(pol_hnd, 3,
...), the same again for level 5, and then lsa_close(policy_handle).

start off in lsa_open_policy() (cli_lsarpc.c).  first thing that happens
is cli_connection_init(".") is called.

the open policy on "." automatically triggers a connection to the
loop-back interface, in cli_con_get().  ncalrpc_l_use_add is used to
obtain a pre-existing connection to ncalrpc:\lsarpc (lsarpc over network
computing architecture local-RPC, implemented as a unix domain socket) or
to create one if it doesn't exist.  the user credentials vuser_key are
used to distinguish connections under one user-context from others.
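
to make the "use_add" idea concrete, here's a toy version of that lookup.
the struct and names below are made up for illustration (they are not the
real ncalrpc_l_use_add); the point is just the keying on pipe-name plus
user-context, and the shared use count:

#include <stdio.h>
#include <string.h>

struct user_key { int pid; int vuid; };         /* toy stand-in for vuser_key */

struct toy_con {
        char pipe_name[16];
        struct user_key key;
        int use_count;          /* how many callers share this connection */
        int in_use;
};

static struct toy_con cons[8];

/* return an existing connection for (pipe, key), or create a fresh one */
static struct toy_con *use_add(const char *pipe, struct user_key key)
{
        int i;
        for (i = 0; i < 8; i++) {
                if (cons[i].in_use && strcmp(cons[i].pipe_name, pipe) == 0 &&
                    cons[i].key.pid == key.pid && cons[i].key.vuid == key.vuid) {
                        cons[i].use_count++;    /* reuse: same pipe, same user-context */
                        return &cons[i];
                }
        }
        for (i = 0; i < 8; i++) {
                if (!cons[i].in_use) {
                        cons[i].in_use = 1;
                        cons[i].use_count = 1;
                        cons[i].key = key;
                        strncpy(cons[i].pipe_name, pipe, sizeof(cons[i].pipe_name) - 1);
                        /* a real implementation would open the unix socket here */
                        return &cons[i];
                }
        }
        return NULL;
}

int main(void)
{
        struct user_key k = { 1234, 100 };              /* pretend pid and vuid */
        struct toy_con *a = use_add("lsarpc", k);
        struct toy_con *b = use_add("lsarpc", k);       /* same key: shared */
        printf("shared=%s use_count=%d\n", a == b ? "yes" : "no", a->use_count);
        return 0;
}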

if this is a new connection, rpc_pipe_bind() is called which does a
DCE/RPC bind / bind-acknowledge sequence.  because this is ncalrpc, the
user-context in vuser_key is embedded in the bind request, where the smbd
pid is stored in assoc_gid and the smbd vuid is stored in context_id.  
the fields _happen_ to be exactly the right size, and they _happen_ to be
all that's needed.  *whew*.  more on this, later, from the server-end
viewpoint.
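
to illustrate why the sizes "happen" to fit: a pid goes comfortably into a
32-bit assoc_group_id, and an smb vuid into a 16-bit context_id.  the
struct below is a made-up, cut-down stand-in for the bind header (the real
one has a lot more in it), just to show the packing:

#include <stdint.h>
#include <stdio.h>
#include <unistd.h>

/* cut-down stand-in for the two borrowed bind fields:
 * assoc_group_id is 32 bits (room for a pid),
 * a presentation context_id is 16 bits (room for an smb vuid) */
struct toy_bind {
        uint32_t assoc_group_id;
        uint16_t context_id;
};

int main(void)
{
        uint16_t vuid = 0x0801;                         /* pretend smb vuid */
        struct toy_bind hdr;

        hdr.assoc_group_id = (uint32_t)getpid();        /* smbd pid */
        hdr.context_id     = vuid;                      /* smbd vuid */

        printf("bind: assoc_group_id=%u (pid), context_id=%u (vuid)\n",
               (unsigned)hdr.assoc_group_id, (unsigned)hdr.context_id);
        return 0;
}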

back to lsa_open_policy().

the function arguments are marshalled and then shipped over using
rpc_con_pipe_req().  this filters through to a loop that calls
rpc_api_write() zero-or-more times, rpc_api_send_rcv_pdu() once and only
once, and rpc_api_rcv_pdu() zero-or-more times.  the splitting of function
call arguments into segments (PDUs) is therefore divorced from the
transport _for_ those PDUs.
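
the shape of that loop, divorced from any particular transport, is roughly
this.  the names and the tiny fragment size are invented stand-ins for the
three calls above, not the real prototypes:

#include <stdio.h>
#include <string.h>

#define MAX_PDU 8       /* absurdly small fragment size, so the loop is visible */

/* toy stand-ins for the three transport calls described above */
static void send_frag(const char *frag, int len)
{
        printf("send fragment, %d bytes\n", len);
}
static void send_last_and_recv(const char *frag, int len, char *reply)
{
        printf("send last fragment (%d bytes), wait for first reply pdu\n", len);
}
static int recv_more(char *reply)
{
        return 0;       /* 0 = no more reply fragments */
}

static void pipe_req(const char *args, int len)
{
        char reply[64];
        int off = 0;

        while (len - off > MAX_PDU) {           /* zero-or-more leading fragments */
                send_frag(args + off, MAX_PDU);
                off += MAX_PDU;
        }
        send_last_and_recv(args + off, len - off, reply);  /* once and only once */

        while (recv_more(reply))                /* zero-or-more reply fragments */
                ;
}

int main(void)
{
        char args[20] = { 0 };                  /* pretend marshalled arguments */
        pipe_req(args, (int)sizeof(args));
        return 0;
}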

let's take rpc_api_write().  because this example is loop-back, a function
called msrpc_send() is called which transmits the PDU across the unix
socket.  simple, huh?  back to rpc_con_pipe_req() ...
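
for the curious, a toy version of that transmit: connect a unix domain
socket and write the pdu.  the socket path below is made up, and the real
msrpc_send() of course looks different:

#include <stdio.h>
#include <string.h>
#include <sys/socket.h>
#include <sys/un.h>
#include <unistd.h>

/* toy transmit: the path is invented, and a server is listening on it
 * only in our imagination */
static int toy_send(const char *path, const char *pdu, size_t len)
{
        struct sockaddr_un sa;
        int fd = socket(AF_UNIX, SOCK_STREAM, 0);

        if (fd < 0)
                return -1;
        memset(&sa, 0, sizeof(sa));
        sa.sun_family = AF_UNIX;
        strncpy(sa.sun_path, path, sizeof(sa.sun_path) - 1);

        if (connect(fd, (struct sockaddr *)&sa, sizeof(sa)) < 0 ||
            write(fd, pdu, len) != (ssize_t)len) {
                close(fd);
                return -1;
        }
        return fd;      /* keep the fd around: the reply comes back on it */
}

int main(void)
{
        char pdu[16] = { 0 };
        int fd = toy_send("/tmp/.msrpc/lsarpc", pdu, sizeof(pdu));

        printf("send %s\n", fd < 0 ? "failed (nothing listening, as expected)" : "ok");
        if (fd >= 0)
                close(fd);
        return 0;
}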

back to lsa_open_policy().

assuming that the transmit and receive of function arguments succeeded, we
have "modified" ([out]) parameters to decode.  in the example of
lsa_open_policy(), that's the policy handle and a status code.

register_policy_hnd() is called with the handle returned by the remote
server.  unfortunately, it's really important that the remote server
returns us a unique handle.  not doing so will cause immense problems and
resource leaks, and there's really not a lot we can do about it.  oh dear,
i wonder if this is the same problem on NT?  never mind.

why do we need a unique handle? because the next call, set_policy_con,
associates the client connection state with the server's policy handle.
this policy handle is the *only* way to connect to the server.
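
the association itself is nothing fancier than a little table keyed on the
handle bytes.  here's a toy version of the same idea as set_policy_con() /
get_policy_con(), with made-up struct and function names:

#include <stdio.h>
#include <string.h>

/* a policy handle is just an opaque 20-byte blob chosen by the server */
struct toy_hnd { unsigned char data[20]; };

struct toy_con { const char *pipe; };   /* stands in for struct cli_connection */

struct hnd_entry {
        struct toy_hnd  hnd;
        struct toy_con *con;
        int             in_use;
};

static struct hnd_entry tbl[16];

/* toy set_policy_con: remember which connection a handle belongs to */
static int set_con(const struct toy_hnd *hnd, struct toy_con *con)
{
        int i;
        for (i = 0; i < 16; i++) {
                if (!tbl[i].in_use) {
                        tbl[i].in_use = 1;
                        tbl[i].hnd = *hnd;
                        tbl[i].con = con;
                        return 0;
                }
        }
        return -1;
}

/* toy get_policy_con: from nothing but the handle, find the connection again */
static struct toy_con *get_con(const struct toy_hnd *hnd)
{
        int i;
        for (i = 0; i < 16; i++)
                if (tbl[i].in_use && memcmp(&tbl[i].hnd, hnd, sizeof(*hnd)) == 0)
                        return tbl[i].con;
        return NULL;
}

int main(void)
{
        struct toy_hnd h = { { 0xde, 0xad, 0xbe, 0xef } };   /* pretend server handle */
        struct toy_con lsa = { "lsarpc" };

        set_con(&h, &lsa);
        printf("lookup: %s\n",
               get_con(&h) == &lsa ? "found the lsarpc connection" : "lost it");
        return 0;
}

you can also see from the memcmp() why a server handing out duplicate
handles would be a disaster: two entries in the table would become
indistinguishable.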

ok, we're done with lsa_open_policy().  on with lsa_query_info_policy().

first thing that happens is we construct the [in] parameters, the policy
handle and the info level.  we then call rpc_hnd_pipe_req().  in there,
cli_connection_get() is called, which takes the policy handle and gets the
client connection state with a get_policy_con() call that we _just_
microseconds ago associated with it in the set_policy_con() call.  we then
do that rpc_con_pipe_req() thing we just did earlier.

back to lsa_query_info_policy().  simple enough: unmarshall the [out]
parameters back to real function arguments, which in this case is the info
level requested and a uint32 status code.

ok, we're done with lsa_query_info_policy().  on with lsa_close().

first thing is that we construct the request data.  the one argument (the
policy handle) is an [in out] parameter, so it's both going to be _sent_
over-the-wire and _received_ (replaced) from over-the-wire.  again, we call
rpc_hnd_pipe_req(), which sends data using the policy handle to reference
the right connection.

close_policy_hnd() is called with the policy handle.  it is worthwhile
stepping into this function, because there's some funny-stuff going on.
there's a free_fn (a higher-order-function) associated with the policy
handle.  if this is non-NULL, it's called with the state-info associated
with the policy handle.  not obvious?  ok, let's step back to
set_policy_con(), then.  this takes a struct cli_connection* and a
pointer to the cli_connection_unlink() function as two of its arguments.
becoming clearer what's going on?  calling lsa_close() *automatically*
calls cli_connection_unlink().
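
in toy form, the free_fn trick is just a function pointer stored next to
the state and called at close time.  the names below are invented, but the
mechanism is the one described above:

#include <stdio.h>

/* what gets stored against a policy handle: some opaque state plus a
 * clean-up hook to run when the handle is closed */
struct hnd_state {
        void  *state;                   /* e.g. the client connection */
        void (*free_fn)(void *state);   /* e.g. an unlink-style function */
};

static void toy_unlink(void *state)
{
        printf("tearing down connection state %p\n", state);
}

/* toy close_policy_hnd: closing the handle runs the hook automatically */
static void close_hnd(struct hnd_state *h)
{
        if (h->free_fn != NULL)
                h->free_fn(h->state);
        h->free_fn = NULL;
        h->state = NULL;
}

int main(void)
{
        int pretend_connection = 0;
        struct hnd_state h = { &pretend_connection, toy_unlink };

        close_hnd(&h);          /* closing the handle tears the connection down */
        return 0;
}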

so, what happens in cli_connection_unlink()?  because this is a loop-back,
ncalrpc_l_use_del() is called, which decreases the reference count (the
number of users on the connection), and when the reference count gets to zero,
the connection is torn down, which will result in a close() on the unix
socket file descriptor.
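
the matching "use_del" side is plain reference counting.  toy code again,
with invented names:

#include <stdio.h>
#include <unistd.h>

struct toy_con {
        int fd;         /* the unix domain socket for this connection */
        int use_count;  /* how many callers still share it */
};

/* toy use_del: drop one reference; the last one out closes the socket */
static void use_del(struct toy_con *c)
{
        c->use_count--;
        if (c->use_count == 0) {
                printf("last user gone, closing fd %d\n", c->fd);
                close(c->fd);
                c->fd = -1;
        }
}

int main(void)
{
        struct toy_con c;

        c.fd = dup(1);          /* any real descriptor will do for the demo */
        c.use_count = 2;

        use_del(&c);            /* one user left, socket stays open */
        use_del(&c);            /* count hits zero: the close() happens here */
        return 0;
}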

that's it.  next stage, i will explain what happens server-side in exactly
the same scenario.  following that, i will go on to more complex scenarios
and cover SMB connections.


