MIDLC IDL Compiler

Andrew Tridgell tridge at osdl.org
Sat Jan 15 04:33:41 GMT 2005


Mike,

 > The limit pointer is just a "fence post".

I'll try one more time to explain my concern. If you don't understand
me this time I'll just give up.

Let's look at the pattern you use for enc_XXX():

  int enc_TYPE(struct ndr *ndr, TYPE *obj, unsigned char **dst, unsigned char *dlim);

You tell me that dlim is a destination "fence post" limit pointer. So
I presume this means that it is the limit of memory to write to when
encoding structures into NDR blobs.

Now notice that dlim is an "unsigned char *", not an "unsigned char **";
this means the encoding function can't change its value.

This implies to me that you are pre-allocating the destination buffer
_before_ encoding. But what size do you pre-allocate? There is no way
you can know the size before encoding starts, which implies that you
must be pre-allocating some arbitrary large size and hoping the
encoded data doesn't grow beyond that size. So what size do you
choose? 1k? 8k? 1M? 1G? 
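To make the concern concrete, here is a sketch (in C, with hypothetical names like GUESS_SIZE, enc_foo and encode_with_guess that are not from your code) of the calling pattern the signature forces on you: the destination must be sized before encoding starts, so the caller can only guess.

```c
#include <stdlib.h>
#include <string.h>

#define GUESS_SIZE 8192          /* arbitrary: too small fails, too big wastes */

struct ndr { int dummy; };       /* stand-in for the real ndr context */

/* stand-in encoder: writes n bytes if they fit below dlim, else fails */
static int enc_foo(struct ndr *ndr, const unsigned char *src, size_t n,
                   unsigned char **dst, unsigned char *dlim)
{
    (void)ndr;
    if ((size_t)(dlim - *dst) < n) {
        return -1;               /* encoded data outgrew the guess */
    }
    memcpy(*dst, src, n);
    *dst += n;                   /* dst advances, but dlim never moves */
    return 0;
}

static int encode_with_guess(struct ndr *ndr, const unsigned char *src, size_t n,
                             unsigned char **out, size_t *outlen)
{
    unsigned char *buf = malloc(GUESS_SIZE);
    unsigned char *dst, *dlim;
    if (buf == NULL) return -1;
    dst = buf;
    dlim = buf + GUESS_SIZE;
    if (enc_foo(ndr, src, n, &dst, dlim) != 0) {
        free(buf);               /* nothing to do but guess a bigger size */
        return -1;
    }
    *out = buf;
    *outlen = (size_t)(dst - buf);
    return 0;
}
```

Because dlim is passed by value, the encoder has no way to grow the buffer or even tell the caller how much space it would have needed.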

Note that none of the code you have sent me shows 'dlim' being used in
any way; it's just being passed around everywhere. I guess it's checked
in base functions like enc_ndr_long(), or perhaps you just don't check
it at all at the moment.
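If it is checked, then a base function like enc_ndr_long() presumably has to look something like this (a guess at the shape of the check, not your actual code; the little-endian byte order matches NDR's default):

```c
#include <stdint.h>

/* encode a 32-bit value, refusing to write past the dlim fence post */
static int enc_ndr_long(uint32_t val, unsigned char **dst, unsigned char *dlim)
{
    if (dlim - *dst < 4) {
        return -1;               /* would overrun the pre-allocated buffer */
    }
    (*dst)[0] = (unsigned char)(val & 0xff);
    (*dst)[1] = (unsigned char)((val >> 8) & 0xff);
    (*dst)[2] = (unsigned char)((val >> 16) & 0xff);
    (*dst)[3] = (unsigned char)((val >> 24) & 0xff);
    *dst += 4;
    return 0;
}
```

Even with the check in place, all a failure can mean is "your guess at the buffer size was wrong", which the caller can't sensibly recover from.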

A related problem is the use of 'deferred' and 'dst'. In the code you
use 'deferred' to hold a position in the buffer for later use. But this
makes no sense if you ever call realloc() on the buffer, as 'deferred'
would then become invalid. This implies to me that you don't
realloc() the destination buffer, which again tells me you are using a
fixed size pre-allocated buffer.
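The realloc() problem is easy to demonstrate. If realloc() moves the block, any raw pointer saved earlier dangles; the only safe way to remember a deferred position across a grow is as an offset from the base, rebased afterwards (a sketch with an illustrative helper, not your code):

```c
#include <stdlib.h>

/* grow the buffer and rebase a saved 'deferred' position from its
   offset; the old raw pointer may dangle if realloc() moved the block */
static unsigned char *grow(unsigned char *buf, size_t newsize,
                           size_t deferred_off, unsigned char **deferred)
{
    unsigned char *nb = realloc(buf, newsize);
    if (nb == NULL) {
        return NULL;             /* original buffer still valid */
    }
    *deferred = nb + deferred_off;
    return nb;
}
```

A scheme that stores 'deferred' as a pointer rather than an offset is only correct if the buffer is guaranteed never to move, i.e. a fixed pre-allocation.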

So, using 'slim' can make sense, as the source buffer when decoding
doesn't move. Using 'dlim', 'dst' and 'deferred' on the encoding side
makes no sense at all from a C API point of view.
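For contrast, one conventional shape for the encoding side (a sketch of the general idea, not a proposal for the MIDLC API) keeps base, write offset and allocated size together, grows with realloc(), and records deferred positions as offsets so they survive the buffer moving:

```c
#include <stdlib.h>
#include <string.h>

struct ndr_push {
    unsigned char *data;
    size_t offset;      /* current write position */
    size_t alloc;       /* allocated size */
    size_t deferred;    /* deferred position stored as an offset */
};

/* append n bytes, doubling the allocation whenever it runs out */
static int ndr_push_bytes(struct ndr_push *p, const unsigned char *src, size_t n)
{
    if (p->offset + n > p->alloc) {
        size_t newalloc = p->alloc ? p->alloc * 2 : 16;
        unsigned char *nb;
        while (newalloc < p->offset + n) {
            newalloc *= 2;
        }
        nb = realloc(p->data, newalloc);
        if (nb == NULL) {
            return -1;
        }
        p->data = nb;
        p->alloc = newalloc;
    }
    memcpy(p->data + p->offset, src, n);
    p->offset += n;
    return 0;
}
```

With this shape no size guess is needed, a limit pointer is unnecessary, and 'deferred' stays valid across any number of grows.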

Cheers, Tridge


More information about the samba-technical mailing list