New python PIDL checks cause 2221 new Coverity warnings
abartlet at samba.org
Thu Aug 27 02:44:31 UTC 2015
On Wed, 2015-08-26 at 14:02 +0200, Kai Blin wrote:
> On 2015-08-26 00:24, Andrew Bartlett wrote:
> Hi Andrew,
> > Yes, we expected the compiler to optimise this out. That was a
> > deliberate design pattern to avoid trying to teach PIDL the size and
> > maximum values of 'long' and all the IDL types (both classes of which
> > change size depending on the architecture), and instead chose to rely
> > on the compiler to handle that.
> I'm a bit confused by this. On the first reading, I thought you meant
> that PIDL data structure sizes were architecture-dependent. Which
> would imply that the network traffic produced was architecture-dependent
> as well. As that doesn't really make any sense, I take it you meant
> something else, and I guess it is "depending on the architecture, we
> need different types in order to fit the IDL types". That in turn sounds
> like something we want configure to sort out for us, instead of doing
> (potentially undefined) run-time checks and hoping for the compiler to
> optimize it out.
Specifically, it is the mapping of an arbitrary value of an IDL integer
type, and working out if it will fit into a Python Int, or if a Python
Long is needed.  A Python Int will fit a value up to LONG_MAX, which as
you know is architecture-dependent.
Jelmer expressed a strong preference that we not just use a Python Long
always, due to the memory overhead, and because it creates a subtly
different type in Python2 anyway (the L suffix).
Finally, we should try to keep the solution as generic as possible:
when we add new base types to PIDL, all these lists need to be updated,
and the person doing so probably won't remember the full set of places
to change.
> > The idea was that consistent code across all the types and
> > architectures (duplicated only for signed/unsigned in one area),
> > relying on the compiler to do its work, would be smarter than
> > encoding the compiler's knowledge in PIDL, and risking getting that
> > wrong.
> I'm not sure "consistent code" is worth drowning out the Coverity
> results in a noise of false positives (or rather true positives that
> we want to ignore). We could autogenerate the size tables during
> configure, I guess.
> > The biggest challenge is that a number of cases only really come up
> > on 32 bit systems, where a python long can't hold a uint32_t (on 64
> > bit it can), and so PyInt_FromLong can fail.
> Who's still using 32bit systems these days? Only my ARM boxes are
> still 32bit. :) Anyway, I'm still not convinced that this isn't
> exactly what should be figured out during configure.
> > In short, when writing it we didn't consider Coverity giving
> > warnings about this, we just expected the compiler to optimise it.
> I see no point in crying over spilled milk. So now that the plan has
> collided with reality, how do we proceed to make our Coverity results
> useful?
I'm really not sure. This really wasn't a quick-and-dirty solution;
it was the best we could come up with (I worked with Douglas on it, as
he had started on some 32 bit patches in this area). I originally
tried generating different code for different cases, and we ended up
with a tangled mess of duplicated PIDL generators, and still much the
same problem due to the 32 bit python integer limit.
The only way forward seems to be to add #ifdefs based on bit size, but
this is something we have been very hesitant to do in the past, for
good reason.
Productive suggestions (ie, please look at the generators first, this
isn't a simple area) and patches are most welcome, and thankfully this
area is now tested, so we have a hope of validating the result.
Making this all just work on 64 bit is easy; it is the 32 bit case
that is hard.
Andrew Bartlett
Authentication Developer, Samba Team  https://samba.org
Samba Development and Support, Catalyst IT