Improving the speed of make test
Jelmer Vernooij
jelmer at samba.org
Mon Mar 5 16:05:38 MST 2012
On 05/03/12 21:44, Andrew Bartlett wrote:
> On Mon, 2012-03-05 at 17:44 +0100, Jelmer Vernooij wrote:
>> On 03/05/2012 08:36 AM, Andrew Bartlett wrote:
>>> Jelmer and others interested in selftest:
>>>
>>> A while back, I started doing some profiling to determine where the time
>>> is spent in 'make test', using 'perf'.
>>>
>>> I was surprised to find that 15% of our time is spent in routines
>>> associated with SHA1, due to adding users and kinit. Both of these run
>>> a *lot* of SHA1, because salting the password for the AES-based Kerberos
>>> keys uses several thousand rounds of SHA1, to make brute-forcing
>>> the password hash harder.
>>>
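For context, the expensive step is the RFC 3962 string-to-key function
for the AES enctypes, which is dominated by PBKDF2-HMAC-SHA1 with a
default of 4096 iterations. A minimal sketch with Python's hashlib of
what every password set or kinit pays (the password and salt values are
purely illustrative):

    import hashlib

    # RFC 3962: AES Kerberos keys are derived via PBKDF2-HMAC-SHA1.
    # The default iteration count is 4096, hence thousands of SHA1
    # invocations for every key derivation.
    password = b"Secret007"                    # illustrative
    salt = b"SAMBA.EXAMPLE.COMadministrator"   # realm + principal, illustrative
    key = hashlib.pbkdf2_hmac("sha1", password, salt, 4096, dklen=32)
    print(key.hex())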
>>> The fix is simple:
>>> - change acl.py and similar tests not to create a user for each unit
>>> test, but re-use one for the whole testsuite
>>> - kinit once at the start of make test, for all connections that should
>>> be made as administrator, and use that credential cache for all
>>> connections instead of $USERNAME and $PASSWORD (sketched below)
>>> - create another user if we ever need to modify the groups of the
>>> administrator (the cached PAC won't update).
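The kinit-once part could be as simple as pointing everything at one
shared credential cache via KRB5CCNAME. A rough sketch (the principal,
password and cache path are illustrative, and how kinit takes the
password differs between MIT and Heimdal):

    import os
    import subprocess

    # Obtain one TGT up front and share the credential cache with every
    # test process, instead of running kinit per connection.
    ccache = "/tmp/selftest-admin.ccache"
    env = dict(os.environ, KRB5CCNAME="FILE:" + ccache)

    # MIT kinit reads the password from stdin when stdin is not a tty;
    # Heimdal offers --password-file. Adjust for the KDC in use.
    subprocess.run(["kinit", "administrator@SAMBA.EXAMPLE.COM"],
                   input=b"Secret007\n", env=env, check=True)

    # Anything spawned with this environment reuses the cached ticket,
    # so no further kinit (and no further key derivation) is needed.
    subprocess.run(["make", "test"], env=env, check=True)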
>>>
>>> I've not got around to doing this yet, but as the python selftest
>>> rewrite is under way, I wanted to ensure this was catered for in the
>>> design.
>> Thanks.
>>
>> I think one of the other issues with selftest is that we're running
>> too many high-level (functional) tests rather than unit tests. We can't
>> possibly run all tests with all possible permutations of Samba
>> configuration options.
>>
>> For example, is it useful to run all RPC tests against our own servers
>> with and without the bigendian option? I can see the bigendian option
>> being really useful when running tests against Windows, but our client
>> and server code is generated from the same IDL - we won't find errors in
>> the IDL this way. If we're trying to catch pidl bugs, I think just
>> running rpc-echo with and without 'bigendian' should be sufficient,
>> together with more low-level tests for pidl.
> I agree. We do test a lot of stuff multiple times, and a lot more is
> never tested. We also run almost all the s3 tests with and without
> encryption.
>
> One of my 'pie in the sky' ideas is to match the subunit stream with
> incremental lcov results to determine which tests are adding coverage,
> and which tests are just covering the same ground.
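That could start quite small: run the tests one at a time, record the
set of covered lines after each, and flag tests whose coverage is
already a subset of what earlier tests hit. An entirely hypothetical
sketch, where run_with_coverage() stands in for whatever would actually
drive lcov incrementally:

    def run_with_coverage(test_name):
        """Hypothetical: run one test and return the set of
        (file, line) pairs it covered, e.g. by diffing lcov output."""
        raise NotImplementedError

    def find_redundant(test_names):
        """Report tests adding no lines beyond the cumulative coverage."""
        seen = set()
        redundant = []
        for name in test_names:
            covered = run_with_coverage(name)
            if covered <= seen:  # no new coverage at all
                redundant.append(name)
            seen |= covered
        return redundant

One caveat: the result is order-dependent, so a test flagged late in the
run might still be the cheapest way to obtain that coverage.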
>
> On a more practical level, if you could get me the command to retrieve a
> time-ordered list of tests, it would help me start to attack the slowest
> tests.
The easiest thing to do here is to get the subunit output for a full
test run (should be in st/subunit). You can then feed that into
"./script/show_test_time", a trivial wrapper around "subunit-ls", to get
a list of test timings. It outputs one line per test: the test name and
the timing in seconds.
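Assuming the output really is one "test-name seconds" pair per line, a
throwaway filter to surface the slowest tests could look like this
(slowest.py is just an illustrative name):

    import sys

    # Read "test-name seconds" lines and print the slowest tests first.
    timings = []
    for line in sys.stdin:
        if not line.strip():
            continue
        name, seconds = line.rsplit(None, 1)
        timings.append((float(seconds), name))

    for seconds, name in sorted(timings, reverse=True)[:20]:
        print("%8.2fs  %s" % (seconds, name))

Something like "./script/show_test_time < st/subunit | python slowest.py"
would then give you the twenty slowest tests.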
It might be necessary to split up some tests further, as some of the
existing tests are pretty big.
> Also, if you could tell me how in the python test code (acl.py
> in particular), to change setup code to be once-per-script rather than
> once-per-test, that would help a lot!
Ideally this should overlap a bit with environments in selftest.py. I'd
prefer to do this in a way that we can reuse it later, either by coming
up with our own thing or by reusing something like testresources
(http://pypi.python.org/pypi/testresources). I'll follow up about this a
bit later.
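In the meantime, stock unittest (Python 2.7 and later) already gives us
once-per-class setup, which would cover the acl.py case. A sketch only;
the class name is invented and the real user creation is left as a
comment, since those details live in acl.py:

    import unittest

    class AclTests(unittest.TestCase):
        """Sketch: hoist expensive user creation out of setUp()
        (run once per test) into setUpClass() (run once per class)."""

        @classmethod
        def setUpClass(cls):
            # Stand-in for the user creation acl.py does in setUp()
            # today; it now runs once for the whole class.
            cls.username = "acl-test-user"

        @classmethod
        def tearDownClass(cls):
            # Delete the shared user here once all tests have run.
            pass

        def test_user_is_shared(self):
            # Every test method sees the same pre-created user.
            self.assertEqual(self.username, "acl-test-user")

    if __name__ == "__main__":
        unittest.main()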
Cheers,
Jelmer