abartlet at samba.org
Fri Jan 12 19:30:58 UTC 2018
On Sat, 2018-01-13 at 07:49 +1300, Andrew Bartlett via samba-technical wrote:
> G'Day All.
> I ran some tests overnight as promised.
> The first thing to say is that we (sadly) need to drop Douglas'
> visualisation patches. There are some Python errors in the error cases
> that show up only at the end of a full run (because the DB has junk in
> it) and are not handled.
Once I started looking at branches, I see a workaround for that has
been written. Thanks!
> Then I think we need to run tests on less than this full branch.
> I'll try:
> - master plus the flapping additions
> - metze's branch minus Douglas' patches
> - asn's branch with the flapping additions (but not whoami)
I am building (x4) in the Catalyst Cloud:
- master-and-flapping (pushed 39 sec ago)
- no-winbind-for-4.8 (11 min ago)
- no-catalyst-for-4.8 (13 min ago)
- catalyst-for-4.8 (18 min ago)
- asn-whoami (35 min ago)
Plus metze's new autobuild tree.
I'll post some results later today.
> We have historically always got into a muddle when we combine
> everybody's patches into one push: it feels like it would save time,
> but it actually takes longer, because it assumes that all the patches
> work. For example, I've put in good, tested code that failed; it
> should have just failed its own autobuild, not held up yours.
> For master, I think some builds with just the flapping tests marked
> would be good, then put that in. Then do the rest by topic, owned by
> the author.
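For reference, Samba's selftest framework keeps a list of known-unreliable tests whose results are ignored rather than failing a build. A sketch of what "marking the flapping tests" might look like in that list follows; the test-name patterns below are invented examples, not the actual entries from this branch:

```
# selftest/flapping: regular expressions matching tests whose
# failures are ignored by autobuild (example patterns only)
^samba3.raw.mux.*
^samba4.drs.repl_schema.*
```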
> In the medium term, Jamie (one of my new developers at Catalyst) is
> working to untangle our testsuite inter-dependencies. The aim here is
> to find sets of tests that:
> - are reliable
> - do not depend on each other
> - consume < 4GB of RAM
> - take less than 1 hour
> (And then to split these into parallel test environments)
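The constraints above amount to a packing problem: group independent test suites into parallel environments so each environment stays under the RAM ceiling and the wall-clock budget. A minimal greedy sketch follows; the suite names, sizes, and the packing strategy are my own illustration of the idea, not Jamie's actual work:

```python
def pack_environments(suites, max_ram_gb=4, max_minutes=60):
    """Greedily pack test suites into parallel environments.

    suites: list of (name, ram_gb, minutes) tuples. Suites within an
    environment run sequentially, so the time budget is the sum of
    their durations, while the RAM ceiling applies to each suite
    individually. Returns a list of environments.
    """
    envs = []  # each entry: {"names": [...], "minutes": total}
    # Place the longest suites first; they are the hardest to fit.
    for name, ram, minutes in sorted(suites, key=lambda s: -s[2]):
        if ram > max_ram_gb or minutes > max_minutes:
            raise ValueError(f"{name} cannot fit in any environment")
        for env in envs:
            if env["minutes"] + minutes <= max_minutes:
                env["names"].append(name)
                env["minutes"] += minutes
                break
        else:
            # No existing environment has room: start a new one.
            envs.append({"names": [name], "minutes": minutes})
    return envs
```

With hypothetical suites of 40, 30, and 25 minutes under a 60-minute budget, this yields two environments rather than three sequential hours.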
> At Catalyst, running cloud builds for test is quite normal, often
> before posting and generally before pushing. But I've noticed that,
> even for me, the closer I get to the release deadline, the less
> likely I am to wait for a full 5-hour build for the absolute final
> patch. I'm more likely to do what I did with the talloc patch: trust
> earlier tests on different code and the newly written tests and aim at
> What I would like to get to is a norm where when posting patches for
> review, we post them to (say) gitlab by habit, and by the time they are
> reviewed a clear 'passed/failed' flag is shown so we don't waste time
> on patches that won't pass.
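A CI hook of the sort described might look roughly like the fragment below; the job name, image, and build steps are assumptions for illustration, not Samba's actual configuration:

```yaml
# Hypothetical .gitlab-ci.yml sketch: build and test every pushed
# branch so reviewers see a pass/fail flag alongside the patches.
samba-build-and-test:
  image: ubuntu:16.04
  script:
    - ./configure.developer
    - make -j
    - make test
```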
> In the meantime I'll run our 5-hour testsuite a few more times in hope
> of getting the data on what can safely land for 4.8.
> Andrew Bartlett
Andrew Bartlett http://samba.org/~abartlet/
Authentication Developer, Samba Team http://samba.org
Samba Developer, Catalyst IT http://catalyst.net.nz/services/samba