tridge at samba.org
Fri Oct 20 04:54:56 GMT 2006
> That makes it hard to run perl-based tests, for example, but I guess I
> can live with it.
yes, for perl-based tests the child is the only sane choice (calling
perl from C is a nightmare - it's one of those things that seems easy
when you first try it, but quickly descends into a quagmire).
btw, I've been looking at some of the current build failures, and I
think at least some of them are caused by the piped_child() code. It
assumes that every platform has socketpair(), and socketpair() is
rather inexactly emulated by pipe() in lib/replace/. The big
difference between pipe() and socketpair() is that a pipe() is (on
some platforms at least) a non-symmetric pair of file descriptors,
where you must only read on the first fd and only write on the 2nd
one. By contrast, a socketpair creates a pair of symmetric fds where
you can read or write to either.
This should be easy to fix: just use pipe() on all platforms, get
the fd and fd ordering right, and open fd 0 on /dev/null to
give an empty fd for stdin. Or we can correctly emulate socketpair()
using something like the socketpair_tcp() code that was used in the
old torture code in Samba.
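For illustration, here is a hedged sketch of what a pipe()-based
piped_child() could look like - the function name matches the one
discussed above, but the body and signature here are mine, not the
actual Samba code. It honours pipe()'s fd ordering (read on p[0],
write on p[1]) and opens stdin on /dev/null as suggested:

```c
/* hypothetical sketch, not the real Samba piped_child() */
#include <fcntl.h>
#include <unistd.h>

/* Run argv[] as a child; parent reads its output from *read_fd. */
static pid_t piped_child(char *const argv[], int *read_fd)
{
	int p[2];
	pid_t pid;

	if (pipe(p) != 0) return -1;

	pid = fork();
	if (pid == -1) {
		close(p[0]); close(p[1]);
		return -1;
	}

	if (pid == 0) {
		int devnull = open("/dev/null", O_RDONLY);
		close(p[0]);          /* child only writes */
		dup2(devnull, 0);     /* empty stdin */
		dup2(p[1], 1);        /* stdout -> pipe write end */
		dup2(p[1], 2);        /* stderr too */
		close(p[1]);
		close(devnull);
		execvp(argv[0], argv);
		_exit(127);
	}

	close(p[1]);                  /* parent only reads */
	*read_fd = p[0];
	return pid;
}
```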
I've also got a working piped_child() in junkcode here:
> All systems that Samba4 builds on have dup2() - we're relying on it now
> to run LOCAL-TALLOC and that seems to be ok on all hosts.
> However, this prevents output from being written to the screen
> immediately. You'll have to wait until an entire testsuite finishes
> before smbtorture can read from the fd and send output to the screen.
There are hackish games we could play to work around this, but they
are not pretty. For example, we could have a timer event that reads
from the pipe and writes to the real stdout, or we could use the same
techniques as socket_wrapper to intercept a few calls. I think I
prefer method (1), the logging function being passed in.
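To make the "read the pipe as output arrives" idea concrete, here is a
hedged sketch (my names, not Samba's event code) that polls the
child's fd and relays bytes to the real stdout immediately, instead of
waiting for the whole testsuite to finish:

```c
/* illustrative sketch only - the real thing would hang off a
   timer/fd event in the Samba event loop */
#include <poll.h>
#include <stdio.h>
#include <unistd.h>

/* Relay everything from 'fd' to stdout as it arrives.
   Returns the total number of bytes relayed. */
static long relay_output(int fd)
{
	struct pollfd pfd = { .fd = fd, .events = POLLIN };
	char buf[4096];
	long total = 0;

	for (;;) {
		if (poll(&pfd, 1, -1) <= 0) break;
		ssize_t n = read(fd, buf, sizeof(buf));
		if (n <= 0) break;               /* EOF or error */
		fwrite(buf, 1, (size_t)n, stdout);
		fflush(stdout);                  /* show output immediately */
		total += n;
	}
	return total;
}
```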
> > 3) declare simple versions of the torture_ functions either as part
> > of libreplace or as part of a lib/testsuite/ library that is
> > linked into all tests, and included with talloc, libreplace etc.
> I'd rather not have another dependency for talloc, replace, ldb, etc...
> if we would like to allow test discovery (knowing which tests are going
> to be run beforehand), then this code is non-trivial.
the versions of these functions that are used in the standalone test
suites (ie. when for example talloc is built standalone) would be very
different and much simpler than the functions that smbtorture
provides. Basically torture_comment() and torture_fail() would map to
a printf() call. We could actually put these in libreplace.
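A minimal sketch of that mapping, assuming the standalone builds only
need the simplest possible behaviour (the macro shapes here are my
guess, not the smbtorture API):

```c
/* hypothetical standalone stand-ins for the torture_ functions:
   comment is just printf(), fail prints and exits */
#include <stdio.h>
#include <stdlib.h>

#define torture_comment(ctx, fmt, ...) \
	printf(fmt, ##__VA_ARGS__)

#define torture_fail(ctx, fmt, ...) do { \
	fprintf(stderr, "FAILED: " fmt "\n", ##__VA_ARGS__); \
	exit(1); \
} while (0)
```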
Hmmm, I just had an idea ....
We already have to support the BSD formatted error functions for
heimdal. Those are the err(), errx(), warn() and warnx()
functions. Try 'man err' on a Linux box for the man pages.
What if we used those as the basis for our 'external' tests? So the
talloc and libreplace test suites would call errx() and warnx() on
errors and warnings appropriately. Then libreplace would provide
replacements for those (currently they are supplied in
heimdal_build/replace.c) but the libreplace versions would have a nice
little hook that can be set to redirect the functions to the torture
ui parse code.
That way libreplace is doing what it is supposed to be doing,
replacing standard functions, our external test suites will use
documented functions and smbtorture can intercept all error and
warning output easily.
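A hedged sketch of what such a hookable warnx() replacement might look
like in libreplace - the hook mechanism and all names here are
illustrative, not existing Samba code:

```c
/* illustrative sketch: BSD-style warnx()/errx() replacements with a
   redirect hook that a torture UI could install */
#include <stdarg.h>
#include <stdio.h>
#include <stdlib.h>

typedef void (*err_hook_fn)(int is_error, const char *msg);
static err_hook_fn err_hook;   /* NULL => default stderr behaviour */

void rep_set_err_hook(err_hook_fn fn) { err_hook = fn; }

void rep_warnx(const char *fmt, ...)
{
	char msg[1024];
	va_list ap;
	va_start(ap, fmt);
	vsnprintf(msg, sizeof(msg), fmt, ap);
	va_end(ap);
	if (err_hook) { err_hook(0, msg); return; }
	fprintf(stderr, "warning: %s\n", msg);
}

void rep_errx(int code, const char *fmt, ...)
{
	char msg[1024];
	va_list ap;
	va_start(ap, fmt);
	vsnprintf(msg, sizeof(msg), fmt, ap);
	va_end(ap);
	if (err_hook) err_hook(1, msg);    /* hook may log first */
	else fprintf(stderr, "error: %s\n", msg);
	exit(code);
}
```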
> Because if you're using shared libraries then you need something like
> that if you want to be able to run binaries from the source directory
> without the need to install libraries to /usr/lib first or requiring the
> user to export LD_LIBRARY_PATH.
I don't mind supporting the use of shared libraries, and I like the
idea of us providing shared libraries for other projects, but I don't
like us having to pay the price in terms of disk space and complexity
when we are building without shared libraries locally (which I think
is a good default).
When a user does choose to build with shared libraries, I don't think
the LD_LIBRARY_PATH is such a problem. We can have a file 'shlibs.sh' in
the source directory and users can do:
to set the right environment variables. This won't be needed if you
haven't enabled shared libs in the build, but if you have then it
allows us to avoid the double binaries.
> The shared libraries are necessary for the openchange folks or others
> who would like to link against our libraries.
yes, I agree that exporting our code as shared libs is a good thing. I
just want to minimise the cost to us as developers of having that
support.
> Sure, but that means abandoning the quest for shared libs. If we're ever
> to support them properly, we need to have them be built on developer
> machines. Shared lib builds break easier than static ones, and the
> shared lib support will just bitrot.
we can build shared libs on some of the build farm machines, so we'll
get an email from the build system when it breaks. That will work as
long as those emails aren't ignored :-)
> I can see the point in integrating things that are related, but
> smbtorture is basically a big switch() statement to run whatever test
> the user specified.
those tests are in fact strongly related - they are all testing
components of Samba. They are also highly intermixed. The talloc test
does things that particularly stress talloc, but just about every
other test also tests talloc indirectly. The same is true of
libreplace, ldb, tdb etc.
> Using binaries makes it very easy to add tests - adding a test is as
> simple as dropping a binary into the torture directory. Tests can be
> written in any language - and tools to do analysis on test results can
> be written in 5 minutes.
It was pretty easy to add LOCAL-TALLOC and LOCAL-LIBREPLACE using the
old 'linked in' system as well :-)
> These aren't 'regular' binaries. They're not installed into
> bin/. The command line syntax is strictly enforced because they're
> being called by smbtorture
except that to answer some of the problems I raised, the solution was
to run them standalone :)
> They can't confuse the user as they live in a private dir.
In this case we are the 'users' of smbtorture, and it sure confused
me :-)
I really love smbtorture, and I think the single binary has been
great. I don't want to move away from that.
> The output of the individual test binary can be run through something
> different than smbtorture though, such as a script that generates HTML
> or stats.
ok, but I presume that will still work for the built-in tests? Or were
you thinking of having a separate binary for all our RAW-*, RPC-* etc
tests?
> The test binary doesn't have to be written in C and doesn't have to call
> a specific function. This means it also works with shell scripts, python
> scripts, perl, etc.
yep, I think the piped_child approach is fine for shell scripts, perl
and python scripts. I would like to keep it internal for js scripts
and C code.
> I'm not saying we should ignore the segfault, but if we have one test
> segfault that shouldn't mean none of the other tests run.
Actually, that raises an interesting point. One thing that might be
useful in the GUI is a "stop on error" button, so you can then attach
gdb and see the error in more details (or just look at the most recent
packets in wireshark). If the tests are a child process talking over a
pipe, how would you stop on an error? :-)
> * There seems to be a lot of resistance against having separate
yep! I'm leading the resistance, but there seems to be some other
dissenting voices too :-)
> * Is building separate binaries in bin/ and bin/install/ considered a
> bad idea? Will building with shared libs enabled by default where
> possible be a problem in the future? If so, I think there is no point in
> pursuing building any of our libs as shared libraries.
I'd like to explore some other options for this. I'm not ready to give
up on shared libs completely just yet, but I'd like to see if we can
work out an alternative to the changing binaries on install :)