Andrew's roadmap for HEAD (was Re: dce/rpc "client" api)

Elrond elrond at samba.org
Tue Aug 22 17:44:01 GMT 2000


[I've attached Andrew's full posting, because I wasn't able
to find it in the archive, so others have it too]

Hi Andrew,

Okay. This sounds quite useful to me.


To me this sounds like "DCE/RPC over an in-process API"
(ncalrpc_api?). So this is really just another transport
for DCE/RPC.


I would suggest the following approaches (for
interoperability with TNG and with other stuff):

1) A "client-side" implementation is written, that
   resembles cli_connect.c in TNG, so that ncalrpc_api is
   just another transport, that is supported by the
   client-side APIs.

2) I would also like smbd to forward/redirect its
   DCE/RPC connections over this client-side API. (TNG
   already does that.)

3) I would like "DCE/RPC over unix sockets" to be added to
   the same API as mentioned in 1, as a fallback after
   trying ncalrpc_api. (This can of course be switched off
   by smb.conf options, if you like.)
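
To make 1 and 3 a bit more concrete, here is a very rough
sketch of what I have in mind. All the names in it are made
up by me for illustration; they are not taken from TNG or
from HEAD:

    #include <stddef.h>

    struct rpc_con;             /* opaque connection handle */

    /* One connect function per transport; all share the same
     * signature.  Only declared here, each transport would
     * implement its own. */
    struct rpc_con *ncalrpc_api_connect(const char *server,
                                        const char *pipe_name);
    struct rpc_con *unix_socket_connect(const char *server,
                                        const char *pipe_name);
    struct rpc_con *smb_pipe_connect(const char *server,
                                     const char *pipe_name);

    int is_local_machine(const char *server);  /* hypothetical helper */

    /* Generic client-side connect in the spirit of TNG's
     * cli_connect.c: for local targets try the in-process API
     * first, then DCE/RPC over unix sockets, and fall back to
     * SMB (or TCP, port 445) for everything else. */
    struct rpc_con *rpc_client_connect(const char *server,
                                       const char *pipe_name)
    {
            struct rpc_con *con;

            if (is_local_machine(server)) {
                    con = ncalrpc_api_connect(server, pipe_name);
                    if (con != NULL)
                            return con;
                    con = unix_socket_connect(server, pipe_name);
                    if (con != NULL)
                            return con;
            }
            return smb_pipe_connect(server, pipe_name);
    }

Whether ncalrpc_api and the unix-socket transport get tried
could then be controlled by smb.conf options, as suggested
in 3.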


So why am I asking for this?

1)

    - It might be necessary (if only for clarity) to use
      client-side calls in the server-side backends. If
      those calls are targeted at the local machine, they
      should of course be handled locally. Otherwise they
      should be properly forwarded over SMB (or TCP, port
      445, whatever). (A small usage sketch follows after
      this list.)

      (This is actually done in TNG: lsarpcd asks samrd a
       lot of stuff, and the code is now somewhat clearer.)

    - It makes 2 and 3 a lot easier.

2) 

    - This simply makes things clearer. smbd then becomes
      something like an endpoint mapper, to some degree.

3)

    - I would like easy interop with TNG.
    - This allows the set of pipes known to smbd to be
      extended even when the OS does not support dlopen().
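
Here is the usage sketch promised under reason 1: a
server-side backend simply goes through the generic
client-side API and does not have to care whether the
target is local or remote. Again, every name except the
"samr" pipe is invented by me:

    #include <stddef.h>

    struct rpc_con;             /* as in the sketch above */
    struct rpc_con *rpc_client_connect(const char *server,
                                       const char *pipe_name);
    void rpc_client_close(struct rpc_con *con);
    int samr_query_something(struct rpc_con *con);  /* made up */

    int lsa_backend_lookup(const char *server)
    {
            /* If "server" is the local machine, rpc_client_connect()
             * ends up on the in-process ncalrpc_api transport (or on
             * unix sockets); otherwise the call is forwarded over
             * SMB or TCP port 445.  This code does not care which. */
            struct rpc_con *con = rpc_client_connect(server, "samr");
            int ret;

            if (con == NULL)
                    return -1;

            ret = samr_query_something(con);
            rpc_client_close(con);
            return ret;
    }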


I don't think I'm requesting a lot of coding effort. A lot
of this stuff is already done in TNG and can be looked at
(or reviewed and copied).


Okay, I have to say something about dlopen() too. From
Andrew's mail it sounds as if every OS out there supports
dlopen() perfectly and there are no problems.

Having followed libtool since its 0.something stages (when
it was used in gtk), I know that shared libraries are
already complex enough. dlopen support was added to libtool
some time later, and that step was just as complex as the
first one.

That said, there _are_ OSes out there that don't properly
support dlopen(), or only support a subset of it.

For example, it is nearly impossible to reference global
variables of the dlopen_ing_ "main" program from the
dlopen_ed_ "module" on AIX <= 4.1.
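
A tiny example of the pattern that breaks there (the file
and variable names are mine, just for illustration):

    /* main.c -- the dlopen_ing_ program (think smbd) */
    int debug_level = 0;

    /* module.c -- compiled into a dlopen_ed_ module */
    extern int debug_level;     /* refers to the main program's
                                 * global; resolves fine on most
                                 * platforms, but not on AIX <= 4.1 */

    void module_init(void)
    {
            debug_level = 10;
    }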

Next thing: some OSes have a global name space for their
modules, so the names in this name space must be unique; no
module may define the same name as another module. I don't
know which OS this is (possibly ELF-related?), but the zsh
maintainers have documented it in their extension docs.
(--> pipe_all.so isn't directly possible on those OSes.)
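
For illustration (invented names), two modules may not both
export a helper with the same name on such an OS:

    /* pipe_samr.c -> pipe_samr.so */
    int parse_request(void) { return 1; }

    /* pipe_lsa.c -> pipe_lsa.so */
    int parse_request(void) { return 2; }   /* same name: clashes in a
                                             * flat module name space */

The usual workaround is to make internal helpers static (or
give them a unique prefix), so that only the intended entry
points are exported.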

Lastly, you will have to fight a long versioning battle.
Gtk/glib fought this one for a long time until they reached
a point they liked (well, I didn't like the result, but I
know they found a solution, and I know I don't like it for
personal/devel reasons). The NSS people in glibc already
have this problem too: they have different NSS modules for
glibc 2.0 and 2.1 (both cluttering up my hard disks).


Okay, that all said: If you want dlopening, go for it, but
please consider my requests at the beginning/middle of this
mail.


    Elrond



On Tue, Aug 22, 2000 at 05:58:20PM +0200, Sander Striker wrote:
> 
> > Agreed. I guess the decision on daemons or libraries has
> > slipped my mind. I think it makes sense to do a publication
> > of some kind of technical roadmap, at least for HEAD, at
> > certain intervals (each month?).
> 
> I doubt anyone has time for that frequent a roadmap, but I hope that
> the following will give you some idea of the reasons behind this
> particular decision.
> 
> The architecture that I want to see for rpc services in Samba 3.X is
> like this:
> 
> - we have a jump table just like api_fd_commands[] in the HEAD
>   branch version of srv_pipe.c.
> 
> - on platforms with working shared libraries and dlopen() (that's
>   nearly all platforms that Samba supports) the api_fd_commands[]
>   table will be completely empty. None of the pipe implementations
>   will be linked into Samba.
> 
> - when smbd gets an open on a pipe called FOO it will first look in
>   api_fd_commands[] and then when that fails (as it will always do for
>   modern OSes like Linux and Solaris) it will look for a shared
>   library called pipe_FOO.so in a Samba library directory.
> 
> - that shared library needs only one public entry point, called
>   pipe_init(const char *pipename). That function returns a pointer to
>   a structure containing information about the pipe (flags etc) plus
>   pointers to functions that implement the various functions needed to
>   implement the pipe. The main function will be the equivalent of
>   the api_FOO_rpc(pipes_struct *p) that we have now in each pipe
>   implementation.
> 
> - notice that pipe_init() takes the pipe name as an argument. That
>   allows a single shared library to handle multiple pipes (or even all
>   pipes). 
> 
> - if the dlopen() on the shared object fails then the pipe open will
>   fail.
> 
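
To make sure I've understood the scheme above, here is how
I imagine the loader side could look. Only pipe_init(const
char *pipename) is from Andrew's description; the struct
layout, the helper names and the library path are my own
guesses:

    #include <dlfcn.h>
    #include <stdio.h>

    struct pipes_struct;                /* as in srv_pipe.c */

    /* What pipe_init() might hand back: some flags plus the
     * function that does the work, i.e. the equivalent of
     * today's api_FOO_rpc(pipes_struct *p). */
    struct pipe_ops {
            int flags;
            int (*rpc_fn)(struct pipes_struct *p);
            /* ... more hooks as needed ... */
    };

    typedef struct pipe_ops *(*pipe_init_fn)(const char *pipename);

    struct pipe_ops *load_pipe_module(const char *pipename)
    {
            char path[256];
            void *handle;
            pipe_init_fn init;

            /* (a real version would have consulted api_fd_commands[]
             * before getting here) */

            snprintf(path, sizeof(path), "%s/pipe_%s.so",
                     "/usr/lib/samba" /* placeholder lib dir */,
                     pipename);

            if ((handle = dlopen(path, RTLD_NOW)) == NULL)
                    return NULL;        /* -> the pipe open fails */

            init = (pipe_init_fn)dlsym(handle, "pipe_init");
            if (init == NULL) {
                    dlclose(handle);
                    return NULL;
            }

            /* pipe_init() gets the pipe name, so a single pipe_all.so
             * (symlinked to every pipe name) can dispatch on it. */
            return init(pipename);
    }
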
> Now I'd better explain why I think this is a good system.
> 
> First off, I'd like to point out that the above allows us to fairly
> trivially implement a daemon architecture for some (or all) of the
> pipes if we want to. To do that you just make the pipe_FOO.so do
> whatever it wants to in order to talk to the daemon. You could even
> have a single pipe_all.so with symlinks to the other pipe names and
> have that shared library just be a shim that calls to daemons via
> whatever IPC mechanism you like.
> 
> Next, note that the above scheme gives us a smooth migration path from
> our current code base. In particular, note that a shared object loaded
> via dlopen() has access to the global variables declared in the main
> program (smbd in this case). You have to set some OS specific flags
> at compile and/or link time to enable this, but I have tested it on
> quite a few platforms and it does work on every platform that I have
> tried.
> 
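
For reference: on GNU toolchains the link flag in question
is typically -rdynamic (which passes --export-dynamic to
ld, so that the main program's symbols land in its dynamic
symbol table); other platforms need their own equivalents.
A minimal illustration, with an invented global name:

    /* smbd.c -- linked with something like
     *     cc -o smbd smbd.o ... -rdynamic
     * (GNU toolchain example; other platforms differ) */
    int debug_level = 0;

    /* pipe_foo.c -- compiled into pipe_foo.so */
    extern int debug_level;     /* resolves against smbd itself
                                 * when the module is dlopen()ed */

    void log_something(void)
    {
            if (debug_level > 5) {
                    /* ... */
            }
    }
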
> Why is it significant that we can access global data structures?
> Because inside smbd and our libraries there are lots of assumptions
> that certain global variables will be available, particularly those
> related to security. By using a scheme where this code keeps working
> we can migrate to the new architecture without "breaking the world". 
> 
> This scheme also allows sysadmins to add new pipe implementations on
> the fly, without even recompiling Samba. Say a sysadmin wants to use
> official Debian .deb files for Samba, but also wants some new pipe,
> like exchange on their system. They just copy pipe_exchange.so into
> the Samba lib directory and without even restarting Samba the exchange
> pipe becomes available. 
> 
> One other advantage of this scheme is that it smoothly gives the
> ability for a "bleeding edge" and "stable" version of the pipe
> implementations while using a common smbd. And we get all this without
> having to invent a new protocol for smbd to talk to external daemons.
> 
> Note that I didn't invent this whole scheme. Those of you familiar
> with NSS (ie. the code behind /etc/nsswitch.conf) will recognise that
> I am just borrowing the idea. It is also used in other applications
> and a very similar system is used in things like PAM. The current
> Samba VFS code uses parts of the above ideas as well.
> 
> Anyway, I hope this gives you some idea of what I am thinking. I have
> explained this scheme to Luke a couple of times and at the time he
> seemed to like it. He may have since found some problems with it, in
> which case I would welcome technical discussion of those problems.
> 
> Cheers, Tridge



