tridge at samba.org
Thu Oct 19 22:42:41 GMT 2006
> That's what I'm trying to figure out - hence why I am experimenting with
> running TORTURE-TALLOC as a subprocess. I don't really see the problem
> with having more binaries - they'll live under $LIBDIR/torture, not in
That in itself creates problems! When I then run smbtorture will it
run the one in the install dir or the build directory? Will I run one
from an old tree, which version does it run? Does it mean smbtorture
will not work if I haven't done a 'make install' ? Or will it have
some search path, and thus have a difficult to predict behaviour?
A single binary is so simple, and that simplicity has been a huge
benefit to us.
> The alternative is having a bunch of torture_* macros on top of
> testsuites that need to be able to run standalone
that's not the only alternative :)
As I see it, what you are trying to do is to capture output from these
tests into your torture_ infrastructure. You are currently planning on
doing that by capturing stdout/stderr of a child process. There are
alternative methods that don't involve a child process :)
Here are some possibilities to consider:
1) pass in a logging function to the tests. The logging function
would be similar to fprintf(), and you could even make it
fprintf() compatible. Test suites like the talloc one would call
this logging function. That will feed into the same parsing code
you have now (the parse code for child processes).
2) if you really want to get down and dirty, then dup2() file
   descriptors 0 and 1 onto different fd numbers, then use a pipe for
   fd 0 and 1, and capture the output internally. That means that
   printf() inside the tests will redirect appropriately. Not
   brilliantly clean, but quite possible. It's even doable on systems
   that don't have dup2() (using the lowest-free-descriptor semantics
   of fd assignment in open() and dup()).
3) declare simple versions of the torture_ functions either as part
of libreplace or as part of a lib/testsuite/ library that is
linked into all tests, and included with talloc, libreplace etc.
I personally prefer (1), but I think all of the above would be much
better than lots of little binaries and child processes.
> We already build two versions of smbtorture - one in $srcdir/bin
> that contains paths in $srcdir (TORTUREDIR is defined to be
> $SRCDIR/bin/torture, for example) ~/some/branch/bin/smbtorture
> works fine at the moment.
what?? Why on earth are we doing that?
This is smelling more and more like the dreaded libtool approach. When
I test a binary in my source tree I want it to be absolutely identical
to the one I install. I don't want the install procedure to mangle it.
Wow, I just did a 'make installbin' and I see it chews up an extra
250M in bin/install/ now. Yuck!
I know that games need to be played when using shared libs (which is
one of the most idiotic design decisions ever made - shared libs
should make life simpler not harder!!), but I am not using shared
libs, so I don't want to have to play these silly games.
Can we please get rid of this?
> That means smbtorture as one big monolithic app that depends on all
> the rest of Samba...
yep, exactly as I like it!
> I'd rather say it's the unix way of doing things... having a lot of
> smaller binaries that each do one thing and do it right, rather than one
> monolithic application.
oh, you've invoked the 'unix way' argument :-)
Have you run 'type echo' in bash lately? What about kernels - the
'modular unix way' would imply micro-kernels, and that didn't work out.
We already have _far_ too many binaries in Samba, and historically it
has bitten us badly with all our different command line syntaxes for
each binary (despite several efforts to unify them), and problems with
running out of disk while building Samba (remember the several GB it
takes on some platforms with -g?).
The 'unix tools' stuff comes from the days of old unix, when the
entire unix source tree was smaller than what we now have in Samba. It
was from the days when you could boot a unix system off a single
floppy.
> > > Using valgrind on ./bin/torture/LOCAL/TALLOC or using the
> > > --trace-children option to valgrind should fix that.
> > yep, but that doesn't help on the build farm
> Why? Can't we use it on the buildfarm?
I've been bitten by --trace-children too often, and it's one of the
reasons I run smbd with -M single when developing. It doesn't work at
all with --db-attach, and seems to also cause problems in simpler
cases.
> > > Alternatively, we could get smbtorture to wrap the commands it was
> > > executing in gdb or valgrind, though I'd prefer to avoid that if
> > > possible.
> > I'd prefer to avoid that too
> It's still possible though to run gdb on the individual binary.
When I run tests in Samba, I run smbtorture, and then add gdb --args
when debugging it. I don't want to have to think about what binary
it's running.
> > What difference is there apart from the tiny main() function?
> We'd need wrapper macros/functions for the report functions
see above alternative solutions :-)
All you're doing now is using fork/exec as a way to capture stdout and
stderr. That's a very heavy-handed approach to intercepting IO!
> It makes it impossible to run more than one test with smbtorture
> though as smbtorture will go down with the tests.
If smbtorture crashes, I want it to crash! I don't want it to sail on
after some part of the tests have seg faulted. This code that you are
trying to isolate is linked into smbd, and we certainly don't want
smbd to sail on when it seg faults.
> Yes, though it'd be rather ugly having one function call
> report("test: foo\n");
> and then have that function parse its arguments.
how is it any more ugly than doing exactly the same thing, but on a
fork/exec child that sends stuff to stdout/stderr and then parses
those stdout/stderr streams? You're doing exactly the same parsing.