[Patches] Expanded group memberships on boundaries of outgoing trusts (bugs #13299, #13300, #13307)

Douglas Bagnall douglas.bagnall at catalyst.net.nz
Thu Mar 1 08:16:33 UTC 2018


On 01/03/18 20:05, Stefan Metzmacher via samba-technical wrote:
> Hi Douglas,
> 
>>> I'll aim this at the perf testing rig,
>>
>> The results are shown in the attached chart, which compares
>> yesterday's version of the patchset with yesterday's origin/master
>> over three runs.
>>
>> Adding large groups (i.e. adding a group with lots of members in a
>> single SamDB.add_ldif() operation) is slower, as is populating an
>> existing group using add_ldif. Other operations are the same speed or
>> slightly faster.
>>
>> There is quite a bit of noise when testing with just 3 runs, but the
>> consistency across the different tests suggests this is a real change.
> 
> Thanks! I hope the new version doesn't have such an impact.
> 
>> I will try again with the current head of 
>> https://git.samba.org/?p=metze/samba/wip.git;a=shortlog;h=refs/heads/master3-trusts-ok
> 
> The one that I recently pushed should compile and work.

We should have a graph for that shortly.

> Can you explain how you produced this?

We have a box (unfortunately with quite an old AMD chip) that does
nothing but run performance tests from selftest/perf_tests.py, usually
(as in this case) just the ones in
source4/dsdb/tests/python/ad_dc_medley_performance.py.

We can submit commit IDs and the number of runs through a little web
interface, and it churns through collecting the test times for each
run, then compiles the numbers, draws the picture, and sends us an
email. (It's all somewhere in
http://git.catalyst.net.nz/gw?p=samba-cloud-autobuild.git).
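
In case a concrete picture helps, the flow is roughly the following.
This is a toy sketch rather than the real samba-cloud-autobuild code:
the refs, the make target, and the "name: seconds" output format it
parses are all made-up placeholders.

#!/usr/bin/env python3
# Toy reconstruction of the rig's flow; not the real autobuild code.
import subprocess
from collections import defaultdict

COMMITS = ["origin/master", "my-topic-branch"]   # hypothetical refs
N_RUNS = 3

def run_once(commit):
    """Check out a commit, run the perf tests, return {test: seconds}."""
    subprocess.run(["git", "checkout", commit], check=True)
    out = subprocess.run(["make", "test", "TESTS=samba.tests.perf"],
                         capture_output=True, text=True).stdout
    times = {}
    for line in out.splitlines():
        name, sep, secs = line.rpartition(": ")
        if sep:
            try:
                times[name] = float(secs)
            except ValueError:
                pass
    return times

results = {}
for commit in COMMITS:
    per_test = defaultdict(list)
    for _ in range(N_RUNS):
        for name, secs in run_once(commit).items():
            per_test[name].append(secs)
    # keep the quickest of the N runs for each test (see the caveat below)
    results[commit] = {name: min(ts) for name, ts in per_test.items()}

for name in sorted(results[COMMITS[0]]):
    print(name, *(results[c].get(name) for c in COMMITS))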

To understand exactly what is being tested you really need to look at
ad_dc_medley_performance.py, which pretends to be a series of unit
tests but carefully doesn't clean up after itself, so it builds up to
6000 users and several groups of various sizes.
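
The pattern is something like this (illustrative class and method
names only, with plain lists standing in for the real samdb calls):

import time
import unittest

class MedleyishPerfTest(unittest.TestCase):
    # class-level state that is deliberately never torn down, so each
    # test runs against whatever the earlier ones left behind
    users = []

    def test_00_add_users(self):
        start = time.time()
        for i in range(2000):
            self.users.append("user%d" % i)   # stand-in for samdb.newuser()
        print("add_users: %.3fs" % (time.time() - start))

    def test_01_add_group_with_members(self):
        # operates on the 2000 users the previous test accumulated
        start = time.time()
        group = list(self.users)              # stand-in for SamDB.add_ldif()
        self.assertEqual(len(group), 2000)
        print("add_group: %.3fs" % (time.time() - start))

if __name__ == "__main__":
    # unittest runs methods in alphabetical order, which the numeric
    # prefixes exploit to build the database up step by step
    unittest.main()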

Of course, when considering this you need to combine the usual caveats
about micro-benchmarks with the usual caveats about the unreality of
autobuild and the usual caveats about testing on ancient hardware and
the usual caveats about software written by me. Additionally, when we
choose which of the runs to use we take the quickest for each test,
which is good practice for CPU-bound tests but, I think, a bit crude
for this kind of thing involving multiple processes and databases.
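
To make that last point concrete (the numbers here are invented): with
noisy timings, taking the minimum rewards the one lucky run in a way
that a median would not.

import statistics

runs = [12.1, 9.8, 14.3]          # hypothetical seconds for one test
print(min(runs))                  # 9.8: the "quickest run" rule we use
print(statistics.median(runs))    # 12.1: arguably fairer for I/O-heavy tests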

> Thanks!
> metze

You are welcome.

Douglas


