Improving the speed of make test

Andrew Bartlett abartlet at
Mon Mar 5 13:44:16 MST 2012

On Mon, 2012-03-05 at 17:44 +0100, Jelmer Vernooij wrote:
> On 03/05/2012 08:36 AM, Andrew Bartlett wrote:
> > Jelmer and others interested in selftest:
> >
> > A while back, I started doing some profiling to determine where the time
> > is spent in 'make test', using 'perf'.
> >
> > I was surprised to find that 15% of our time is spent in routines
> > associated with SHA1, due to adding users and kinit.  Both of these run
> > a *lot* of SHA1, because salting the password for the AES-based kerberos
> > keys uses multiple thousands of rounds of SHA1, to make brute forcing
> > the password hash harder.
> >
> > The fix is simple:
> >   - change and similar tests not to create a user for each unit
> > test, but re-use one for the whole testsuite
> >   - kinit once at the start of make test, for all connections that should
> > be made as administrator.  Use that credential cache for all connections
> > instead of $USERNAME and $PASSWORD
> >   - create another user if we ever need to modify the groups of the
> > administrator (the cached PAC won't update).
> >
> > I've not got around to doing this yet, but as the python selftest
> > rewrite is under way, I wanted to ensure this was catered for in the
> > design.
> Thanks.
> I think one of the other issues with selftest is also that we're running 
> too much high level (functional) tests rather than unit tests. We can't 
> possibly run all tests with all possible permutations of Samba 
> configuration options.
> For example, is it useful to run all RPC tests against our own servers 
> with and without the bigendian option? I can see the bigendian option 
> being really useful when running tests against Windows, but our client 
> and server code is generated from the same IDL - we won't find errors in 
> the IDL this way. If we're trying to catch pidl bugs, I think just 
> running rpc-echo with and without 'bigendian' should be sufficient, and 
> more low-level tests for pidl.
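
For reference on the SHA1 cost raised earlier in the thread: RFC 3962 derives the AES Kerberos keys with PBKDF2-HMAC-SHA1 at 4096 iterations by default, which is why user adds and kinit dominate the profile. A minimal illustration using Python's stdlib (the password and salt below are made-up stand-ins, not Samba's actual inputs):

```python
import hashlib

# RFC 3962 string-to-key: PBKDF2-HMAC-SHA1, 4096 iterations by default.
# Each key derivation therefore runs thousands of SHA1 rounds, which is
# what shows up under 'perf' when selftest creates users and runs kinit.
password = b"Secret007"
salt = b"EXAMPLE.COMadministrator"  # salt is realm + principal per RFC 3962
key = hashlib.pbkdf2_hmac("sha1", password, salt, 4096, dklen=32)
print(key.hex())
```

Reusing one user (and one credential cache) per testsuite pays this cost once instead of once per test.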

I agree.  We do test a lot of stuff multiple times, and a lot more is
never tested.  We also run almost all the s3 tests with and without

One of my 'pie in the sky' ideas is to match the subunit stream with
incremental lcov results to determine which tests are adding coverage,
and which tests are just covering the same ground.
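
The matching step could be quite simple once per-test coverage data exists; a sketch, assuming each subunit test has been paired with the set of lines an incremental lcov capture says it covered (the coverage sets below are invented examples, not real lcov output):

```python
# Given (test, covered-lines) pairs in run order, flag tests whose
# coverage is entirely contained in what earlier tests already hit.

def redundant_tests(per_test_coverage):
    """Return the tests, in order, that add no new covered lines."""
    seen, redundant = set(), []
    for test, lines in per_test_coverage:
        if lines <= seen:          # nothing new covered by this test
            redundant.append(test)
        seen |= lines
    return redundant

per_test_coverage = [
    ("rpc.echo",    {"ndr.c:10", "ndr.c:11", "rpc.c:5"}),
    ("rpc.echo.be", {"ndr.c:10", "ndr.c:11"}),   # already covered
    ("rpc.samr",    {"samr.c:40", "rpc.c:5"}),
]
print(redundant_tests(per_test_coverage))  # → ['rpc.echo.be']
```

A test flagged here is only redundant for line coverage, of course; it may still exercise distinct data paths.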

On a more practical level, if you could get me the command to retrieve a
time-ordered list of tests, it would help me start to attack the slowest
tests.  Also, if you could tell me how, in the python test code (
in particular), to change setup code to run once per script rather than
once per test, that would help a lot!
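
For the once-per-script case, stock Python unittest already supports class-level fixtures via setUpClass, which runs once for the whole class rather than before every test method. A minimal sketch (the class and user names are hypothetical, not actual selftest code):

```python
import unittest

class SamrUserTests(unittest.TestCase):
    """Hypothetical testsuite: the expensive setup happens once."""

    @classmethod
    def setUpClass(cls):
        # Runs once for the whole class, not per test method: do the
        # costly work here (e.g. create the shared test user, kinit).
        cls.test_user = "selftest-shared-user"  # stand-in for a real add

    def test_lookup(self):
        self.assertEqual(self.test_user, "selftest-shared-user")

    def test_rename(self):
        self.assertTrue(self.test_user.startswith("selftest"))
```

Run with `python -m unittest` as usual; setUpClass (and the matching tearDownClass) have been in unittest since Python 2.7.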


Andrew Bartlett
Authentication Developer, Samba Team 

More information about the samba-technical mailing list