MIDLC IDL Compiler
tridge at osdl.org
Fri Jan 14 03:23:21 GMT 2005
> I thought you said before you read data from the pipe piece-meal rather
> than using complete buffers because an RPC could be several MB?
I think you misunderstood me. I was commenting on the fact that you
pass in a fixed limit pointer for the marshalling buffer. That implied
that the buffer had to be pre-allocated to the maximum possible size
it could reach.
The librpc/ndr/ layer in Samba4 does use a linear buffer to marshal
into, but it expands this buffer as it goes along, using
talloc_realloc(). See the function ndr_push_expand(). So the buffer
grows as needed.
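The idea behind ndr_push_expand() can be sketched like this. This is a simplified, self-contained illustration, not Samba's actual code: the real implementation uses talloc_realloc() against a talloc context, while here plain realloc() and the struct/function names are assumptions made for the example.

```c
#include <assert.h>
#include <stdint.h>
#include <stdlib.h>
#include <string.h>

/* A minimal push buffer that grows on demand, in the spirit of
 * ndr_push_expand(). Names and layout are illustrative only. */
struct push_buf {
	uint8_t *data;
	size_t offset;     /* current write position */
	size_t alloc_size; /* bytes currently allocated */
};

/* Ensure at least 'extra' more bytes fit, doubling the allocation
 * as needed so repeated pushes stay cheap. */
static int push_expand(struct push_buf *b, size_t extra)
{
	size_t needed = b->offset + extra;
	if (needed <= b->alloc_size) {
		return 0;
	}
	size_t newsize = b->alloc_size ? b->alloc_size : 16;
	while (newsize < needed) {
		newsize *= 2;
	}
	uint8_t *p = realloc(b->data, newsize);
	if (p == NULL) {
		return -1;
	}
	b->data = p;
	b->alloc_size = newsize;
	return 0;
}

/* Append raw bytes, growing the buffer first if required. */
static int push_bytes(struct push_buf *b, const void *src, size_t n)
{
	if (push_expand(b, n) != 0) {
		return -1;
	}
	memcpy(b->data + b->offset, src, n);
	b->offset += n;
	return 0;
}
```

The point is that no fixed limit pointer is ever passed in: the marshalling code just pushes, and the buffer stretches underneath it.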
Similarly, when unmarshalling, the ndr layer generates talloc
allocated structures at each level.
> The code I looked at was the ndr_XXX.c files but once it got down to the
> IVAL macros I didn't look further (wasn't really sure where to look)
maybe the stuff to read first is librpc/ndr/*.[ch]. That is the core
set of NDR handling routines that is called by the generated code.
> Just for the record, the reason I chose not to use PIDL in the first place
> was because I wanted MIDL compatibility and language/application
> independence. So there's no point in reading or testing anything that PIDL
> generates because no part of that could be used in a stand alone
There is a point in reading it, as you may find things that you hadn't
thought of. For example, you may not have looked at the consequences
of relative pointers. These are in MIDL, but are not documented, and
are essential for several IDL interfaces used by Microsoft. Having to
deal with relative pointers may well change the way you structure your
compiler. This might also be true of the "subcontext" stuff we have added.
You may also find that you like some of the extensions we have made to
IDL to make building IDL based code much easier.
> I can't do that Andrew. I'm not doing this just for Samba. Top-level
> "glue" is one thing but the leaf routines are generated by some internal
> functions that need to be application independent. Maybe when all is said
> and done we'll find a common set of leaf-ops that can be abstracted but
> for the first pass I want to splice in at a higher level interface where
> *everything* below it is MIDLC generated.
I originally considered making the low level stuff generated, but
rejected it as the lowest level stuff gets quite complex, and putting
it in a little library makes it much easier to debug and expand. It
also makes the generated code much more readable.
For example, in our generated code we make calls to
ndr_check_array_size(). That is a little library function that checks
that a size_is() restriction is being obeyed. The details of how to do
this don't need to be known by the IDL compiler - all the IDL compiler
needs to know is that size_is() must be validated to prevent buffer
overflow attacks. The NDR library then takes care of the check and the
error handling.
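A size_is() check of this kind can be sketched as follows. This is a hand-written illustration of the technique, not Samba's actual ndr_check_array_size() implementation; the struct and function names are assumptions for the example.

```c
#include <assert.h>
#include <stddef.h>
#include <stdint.h>

/* Illustrative pull buffer: the data received off the wire plus a
 * read cursor. Names are hypothetical, not Samba's real API. */
struct pull_buf {
	const uint8_t *data;
	size_t offset;
	size_t length;
};

/* Refuse wire-supplied array counts that would run past the end of
 * the received data. This is what stops a hostile peer turning a
 * size_is() field into a buffer overflow. */
static int check_array_size(const struct pull_buf *b,
                            size_t count, size_t elem_size)
{
	if (elem_size != 0 && count > (SIZE_MAX / elem_size)) {
		return -1; /* the multiplication itself would overflow */
	}
	size_t needed = count * elem_size;
	if (needed > b->length - b->offset) {
		return -1; /* array claims more data than was sent */
	}
	return 0;
}
```

The generated code only has to call the helper at each size_is() site; the subtleties (overflow in the multiplication, remaining-bytes arithmetic) live in one audited place.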
This is even more important for string handling. It would be terrible
for Samba to have to deal directly with the utf16 arrays that
Microsoft puts on the wire. By using a helper function
ndr_pull_string() we can let the application code deal with it as a
native unix string, and have the helper function do all the dirty work
of format conversion.
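The shape of such a string helper can be sketched like this. It is not Samba's ndr_pull_string(); for brevity it only handles code points below 0x80, where the real helper does full charset conversion, and the function name is an assumption for the example.

```c
#include <assert.h>
#include <stdint.h>
#include <stdlib.h>
#include <string.h>

/* Turn a UTF-16LE array as sent on the wire into an ordinary
 * NUL-terminated C string. Code points >= 0x80 are replaced with
 * '?' here; a real implementation would convert them properly. */
static char *pull_utf16_string(const uint8_t *wire, size_t nchars)
{
	char *out = malloc(nchars + 1);
	if (out == NULL) {
		return NULL;
	}
	for (size_t i = 0; i < nchars; i++) {
		uint16_t cp = (uint16_t)(wire[2 * i] | (wire[2 * i + 1] << 8));
		out[i] = (cp < 0x80) ? (char)cp : '?'; /* lossy fallback */
	}
	out[nchars] = '\0';
	return out;
}
```

Again the generated code stays trivial: it calls the helper, and the application never sees utf16 at all.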
> > > One possible solution to remedy this problem would be to simply reuse
> > the
> > > same buffer for all marshalling input and output and resize the buffer
> > > to be larger on demand.
> > yikes! no way. This would interact horribly with the sign/seal code,
> > and the async nature of our rpc libraries.
> How exactly would it interact horribly?
You proposed reusing the buffer. As soon as you reuse marshalling
buffers then you have to deal with working out when the buffer becomes
free again, which is hard when the sign/seal code may still reference
it and async calls may still be in flight.